Tag: Technology

Elon Musk’s 5 Steps to Optimize Product Development

There is a concept in rocketry called the tyranny of the rocket equation. NASA astronaut Don Pettit uses it to describe a ruthless constraint: every kilogram of unnecessary mass you add to a rocket demands exponentially more propellant to carry it. The tyranny isn’t a design failure. It’s physics. And the lesson isn’t really about rockets — it’s about what happens when teams don’t deeply understand the constraints they’re operating inside before they start building.
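The constraint Pettit describes falls out of the Tsiolkovsky rocket equation, which ties a rocket's mass ratio to the velocity change it can deliver. A quick sketch makes the "tyranny" visible; the numbers below are illustrative round figures, not any real vehicle's:

```python
import math

def propellant_mass(dry_mass_kg: float, delta_v: float, exhaust_velocity: float) -> float:
    """Propellant needed to give `dry_mass_kg` a velocity change of `delta_v`,
    per the Tsiolkovsky rocket equation: m0 / mf = exp(delta_v / v_e)."""
    mass_ratio = math.exp(delta_v / exhaust_velocity)
    return dry_mass_kg * (mass_ratio - 1)

# Illustrative numbers: roughly 9,400 m/s to reach low Earth orbit,
# roughly 3,000 m/s effective exhaust velocity for a kerosene engine.
v_e = 3000.0
one_stage = propellant_mass(1000.0, 9400.0, v_e)

# Doubling the delta-v requirement far more than doubles the propellant,
# because the mass ratio grows exponentially with delta-v.
double_dv = propellant_mass(1000.0, 18800.0, v_e)
print(one_stage, double_dv, double_dv / one_stage)
```

Every kilogram of mass you fail to question gets multiplied by that exponential before it reaches orbit.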

That understanding is what separates teams that iterate toward the right answer from teams that get faster and faster at building the wrong thing.

Elon Musk has two mental models worth understanding together. The first is first principles thinking — the foundation. The second is a 5-step engineering process he calls “the algorithm.” Neither works without the other.

First, Understand Your Constraints

Before any of the process matters, a team has to know what it’s actually trying to accomplish and what it’s up against. First principles thinking starts there.

Most teams reason by analogy. We look at what the previous version of the product did, what competitors are shipping, what worked last quarter, and we iterate from there. The problem isn’t the iteration — it’s that the starting point was never examined. We inherit constraints we never questioned and optimize inside boundaries we never chose.

Musk’s framing is to boil a problem down to its most fundamental truths and reason up from there. When he wanted to build rockets, he found they cost $65 million. Rather than accept that as a constraint, he asked what a rocket is actually made of and what those materials cost. The gap between raw material cost and finished market price wasn’t physics — it was accumulated assumption. That gap became SpaceX.

For product teams, the equivalent question is: what are the fundamental measures of success for this product, and what is the real relationship between them? Not the proxy metrics. Not the dashboard your team inherited. The actual truths underneath. What does the user need to accomplish? What does success look like at the level of physics — the irreducible thing you are trying to do?

If a team can’t answer that clearly, no amount of process will save them. They’ll just optimize inside the wrong constraints.

The Algorithm

Once you understand your constraints and your fundamental measures, the algorithm is how you work toward them. The steps have to happen in order — that’s the whole point.

Step 1: Make the requirements less dumb.

Every requirement is wrong. It doesn’t matter who gave it to you — in fact the smarter the person, the more dangerous their requirement, because you’re less likely to push back. Musk’s rule: every requirement must be owned by a name, not a department. You can ask a person why something exists. You cannot ask a department.

Product teams accumulate requirements the same way rockets accumulate mass — incrementally, with good intentions, over time. The PRD inherits from the last PRD. Nobody re-examines the premise. The discipline of step 1 is to trace every requirement back to a person and ask them directly: is this still true?

Step 2: Delete the part or process.

The organizational default is to add. Add a step, add a check, add a field, add a ceremony. We add things “just in case,” and once something exists it tends to stay. Musk’s rule of thumb: if you’re not adding back at least 10% of what you deleted, you didn’t delete enough. The best part is no part.

This is where teams find the room to actually move. Every unnecessary step in a workflow, every feature nobody uses, every approval that exists because of a requirement nobody owns — that’s mass you’re carrying on every subsequent release. Delete aggressively, then see what you actually needed.

Step 3: Simplify and optimize.

This is the step most teams go to first. It’s also the most common mistake. Musk is direct: the most common error of a smart engineer is to optimize something that should not exist. Steps 1 and 2 exist precisely to prevent you from doing elegant, rigorous work on the wrong problem.

This is where the relationship between your fundamental measures matters. If you haven’t defined what good actually looks like — the real measure, not the proxy — you’ll optimize toward the wrong thing with great precision.

Step 4: Accelerate cycle time.

Now that you’re working on the right thing, shorten the feedback loop. Get to users sooner. Iterate faster. But Musk is explicit about the sequence: not before step 3. His line is worth keeping: if you’re digging your own grave, don’t dig it faster. Velocity without prior ruthless deletion just ships the wrong thing more efficiently.

Faster cycles only compound your learning if you’re learning the right things. Which brings us to the most underappreciated part of this framework.

Step 5: Automate.

Last. Musk admits he made this mistake himself on the Model 3 — he went backwards through all five steps, automating before deleting. Automation is a multiplier. If what you’re multiplying is still full of unexamined requirements and undeleted steps, you’ve automated a problem. Make sure it’s the right thing first.

From Good to Better to Best

The real value of this framework isn’t efficiency — it’s the shape of the learning it produces.

Most product teams think about progress as shipping features. The better mental model is moving from a good solution to a better solution to the best solution — and understanding clearly where you are in that progression at any given moment.

Step 1 and step 2 force you to confront what good actually means before you commit resources. Step 3 builds the good solution — stripped of everything that shouldn’t be there, optimized for what matters. Step 4 accelerates how fast you can learn whether your good solution is actually good, and what better looks like. Step 5 locks in what’s working so the team can direct its attention to the next problem.

This is what Pettit’s rocket equation is really pointing at. Every unnecessary thing you carry increases the cost of getting to the next stage — exponentially. The teams that get from good to better to best fastest are the ones who are most ruthless about what they’re carrying and most clear about where they’re trying to go.

The framework is simple. The discipline to apply it — especially when it means telling a smart person their requirement is wrong, or deleting something that took two sprints to build — is rare.

That’s the hard part. The framework is the easy part.

Can you serve others better than you serve yourself?

I have thought about this a great deal. The degrees of abstraction are endless. I have thought about it from a systematic perspective, from a social perspective, from a business perspective and so on. The answer I come to consistently is NO. Now let me explain.

The guiding point is: you cannot serve others any better than you serve yourself. I am not saying that if I want my friend to have a nice car, I must first have a nice car. Nope; what I am saying is that if I want my friend to have a nice car, I must first have the ability to give a nice car. At a personal level, the actions that define us are the foundation of our interactions with others. The personal side goes down a deep meta rabbit hole and is best left for another time.

The part that is more plain is from a business and systematic perspective. I had previously written “A provider cannot deliver a continuity of experience greater than the continuity of experience the provider has internally.”

You cannot manage a customer's inventory any better than you can manage your own (definitely so if you are using the same systems, people, and processes).

Do you think Ford could build cars for Toyota better than it builds Fords? Uh, nope.

Do you think the U.S. can run another country any better than we run the U.S.? Uh, nope; just look at Puerto Rico (usually worse).

If you have variability in your business processes, then when you share those processes with your customers, guess what: they get the same degree of variability.

If your email system sucks when you use it, it will suck when you host it out for your customers to use.

Do you think that Google employees have better mail services than Gmail users? I bet they do, but all services being equal, I bet it's darn close.

The reality is that the systems, people, and processes we use internally will never generate better results just because we're using them on someone else's behalf.

Simple shifts

As the understanding of the web matures, new and unique perspectives on the web are going to drive innovation. In the beginning, having the web at all was the driving force of innovation (the bubble was a side effect), and by all measures that type of innovation is still going on today. Now, after more than a decade of the web, we are seeing innovation based on using the web in unique and innovative ways.

Companies like SalesForce.com (CRM, business applications) and Google (mail, chat, calendar, marketing, and search) are delivering services via the web that were previously costly and difficult to manage. Companies like 37signals (Basecamp for project management) and Zimbra are simplifying and bringing together information and applications in meaningful ways. More and more, simple shifts in thinking about how the web, or more generally the network, can be used are having profound effects on technology and on the community of companies and people on and off the network.

Buy a book and some processor cycles

Amazon is now offering Amazon Elastic Compute Cloud (EC2). This goes along with its other service offerings. I really like the business strategy statement in the Amazon 2005 annual report, but the services part seems kinda tacked on:

Our business strategy is to relentlessly focus on customer experience by offering our customers low prices,
convenience, and a wide selection of merchandise, to provide e-commerce solutions and services to other
businesses and to offer web services applications to developers.

(my emphasis)

They offer services but (via Amazon Web Services Licensing Agreement):

AMAZON WILL NOT BE LIABLE FOR ANY DAMAGES OF ANY KIND ARISING FROM YOUR USE OF, OR INABILITY TO USE, AMAZON WEB SERVICES, INCLUDING, BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, PUNITIVE, CONSEQUENTIAL OR EXEMPLARY DAMAGES, INCLUDING WITHOUT LIMITATION LOST DATA, BUSINESS OR ANTICIPATED PROFITS. CERTAIN JURISDICTIONS DO NOT ALLOW LIMITATIONS ON IMPLIED WARRANTIES OR THE EXCLUSION OR LIMITATION OF CERTAIN DAMAGES, AND SO SOME OR ALL OF THE ABOVE DISCLAIMERS, EXCLUSIONS, OR LIMITATIONS MAY NOT APPLY TO YOU.

But don’t worry if you die or get personally injured:

THIS LIMITATION OF LIABILITY DOES NOT APPLY TO LIMIT AMAZON’S LIABILITY FOR DEATH OR PERSONAL INJURY TO THE EXTENT ONLY THAT IT ARISES AS A RESULT OF THE NEGLIGENCE OF AMAZON OR OF ITS EMPLOYEES, AGENTS OR AUTHORIZED REPRESENTATIVES.

Amazon continues to offer cool services to developers at affordable prices. So check out Amazon Elastic Compute Cloud (EC2).

Update: My Feed management services RFP

FeedBurner wins hands down.

It looks like no other feed management service has the stones to take them on. The FeedBurner reply was the only response I received. I saw traffic from multiple companies that offer feed management services, but none replied to my RFP. I will not name any of the lurking companies since they didn't want to reply.

So in the coming weeks I will be migrating my feeds over to FeedBurner. I am happy to be moving my business to a Chicago-based company. FeedBurner did meet most of my requirements, though I prefer to use Google Analytics for my web traffic analysis. Mint looks cool; I will have to do an RFP for analytics.

I will provide a more detailed post on FeedBurner later.

An estimated 168 million Americans lack broadband access at home!

A Reuters article (via News.com) points out:

An estimated 42 percent of Americans had high-speed Internet access at home in March 2006, according to the Pew Internet & American Life Project. That was up from 30 percent of Americans with high-speed access one year earlier, it said.

Wow, 168 million people do not have broadband at home. The funny thing is that many (technologists, the technorati, and the like) refer to the network as being ubiquitous. It's not, and given the cost issues ($49.00 for Comcast) and the political BS around the last mile, and now net neutrality, it's likely that many may never get broadband. 168 million: in that number lurks the digital divide, and we as a country don't seem to be doing much about it. Anybody want to guess how many children are deprived of the benefit of the information superhighway? Dial-up doesn't count in my book either. Sad, very sad.

Hints of Opportunity

Jon Udell writes in A new breed of highly-available serverless applications:

Amazon’s S3/SQS duo is a green field that invites entrepreneurs to think way outside the box.

I have already proposed prototypes that can take advantage of these services. Amazon will not be the only provider of distributed storage or messaging services (see Cleversafe). These services, plus the services from Google, are just the beginning of a whole class of services that will drive innovation. Start-ups will be able to take advantage of the more efficient cost model and the increased flexibility. I also agree with Jon that SPDADE applications are going to become even more powerful as they integrate with services like S3 and SQS.

Check out Amazon’s S3 and SQS and let your mind run wild.

Link Summary:

Jon Udell’s RSS feed
A new breed of highly-available serverless applications

Amazon Simple Storage Service (Amazon S3)
Amazon Simple Queue Service (Amazon SQS)

Check out Dabble DB from Smallthought

I was reading a post from Tim Bray about Dabble DB by Smallthought, so I went and watched the screencast demo they have up on the site. Wow, very cool. Dabble DB is a collaborative data management, authoring, and publishing web application (I know that description doesn't do it justice). The application lets you copy and paste spreadsheet data into the app. It lets you create associations not explicitly present in the original data. It lets you save views of the data. It publishes data as RSS, and a lot more.

Just go and check it out, you will be impressed.

Links:

Tim Bray ongoing RSS Atom Feed
Dabble DB, Check It Out

Smallthought RSS Feed
Dabble DB

The change in corporate technology ecosystems

I was again listening to the Grand Central Gang episode of the Gillmor Gang. My only comment on the whole podcast is simply that the choice to change software platforms is not based solely on the technology. In my experience, significant change in a corporate technology ecosystem is heavily influenced by its internal rate of return (IRR) and whether that IRR is significantly greater than the IRR of the current solution. Many innovative technologies get adopted slowly because no one is able to produce a cash flow analysis that can move the company into action.
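The IRR comparison can be made concrete in a few lines: the internal rate of return is simply the discount rate at which a project's net present value is zero. The cash flows below are made up for illustration:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of one cash flow per period, discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = 0.0, hi: float = 10.0) -> float:
    """Internal rate of return: the rate where NPV crosses zero (bisection).
    Assumes NPV is positive at `lo` and negative at `hi`."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical platform migration: pay 100 up front, recover value over four years.
project = [-100.0, 30.0, 40.0, 50.0, 60.0]
print(round(irr(project), 3))  # → 0.249
```

If the incumbent solution's IRR is anywhere near that figure, the spreadsheet alone won't move the company, no matter how elegant the new technology is.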

As geeks we sometimes see the potential in technology, but the realization of that potential usually trails significantly. This is due in some part to the inability of us geeky folk to relate the technology to the business. In addition to our geekiness, corporations (read: large ones) like to have projects with high batting averages (read: no failures). Even more limiting is the corporate desire for not only high batting averages but high power numbers (read: no failures and big returns). The short-term thinking of many middle managers reinforces this demand for no failures and big returns.

This is why we see time and time again small upstarts using technology to redefine a market and beat established companies.

Links:

Gillmor Gang RSS Feed
Grand Central Gang