This is the promised second part of my attempt to decide if IBM’s recent large U.S. layoff involves age discrimination in violation of federal laws. More than a week into this process I still can’t say for sure whether Big Blue is guilty or not, primarily due to the company’s secrecy. But that very secrecy should give us all pause because IBM certainly appears to be flouting, or in outright violation of, several federal reporting requirements.
I will now explain this in numbing detail.
SEE ALSO: Is IBM guilty of age discrimination? -- Part one
Regular readers will remember that last week I suggested laid-off IBMers go to their managers or HR and ask for statistical information they are allowed to gather under two federal laws -- the Age Discrimination in Employment Act of 1967 and the Older Worker Benefit Protection Act of 1990. These links are to the most recent versions of both laws and are well worth reading. I’m trying to include as much supporting material as possible in this column both as a resource for those affected workers and to help anyone who wants to challenge my conclusions. And for exactly that reason I may as well also give you the entire 34-page separation document given last month to thousands of IBMers. It, too, makes for interesting reading.
For companies that aren’t IBM, compliance with these laws is generally handled following something very much like these guidelines from Troutman Sanders, a big law firm from Atlanta. What Troutman Sanders (and the underlying laws) say to do, and what IBM seems not to have done, comes down to giving affected workers over age 40 the very information I suggested last week that IBMers request (the number of workers affected, their ages, titles, and geographic distribution). In addition, those older workers have to be encouraged to consult an attorney and must be told in writing that they have seven days after signing the separation agreement to change their minds. I couldn’t find any of this in the 34-page document linked in the paragraph above.
Here’s what happened when readers went to their managers or HR asking for the required information. They were either told that IBM no longer gives out that information as part of laying off workers or they were told nothing at all. HR, according to my readers at IBM, tends not to even respond.
It looks like they are breaking the law, doesn’t it? Apparently that’s not the way IBM sees it. And they’d argue that’s not the way the courts see it, either. IBM is able to do this because of GILMER v. INTERSTATE/JOHNSON LANE CORP. This 1991 federal case held that age discrimination claims can be handled through compulsory arbitration if both parties have so agreed. Compulsory arbitration of claims is part of the IBM separation package. This has so far allowed Big Blue to avoid most of the reporting requirements I’ve mentioned because arbitration is viewed as a comparable but parallel process with its own rules. And under those rules IBM has in the past said it will (if it must) divulge some of the required information, but only to the arbitrator.
I’m far from the first to notice this change, by the way. It is also covered here.
So nobody outside IBM top management really knows how big this layoff is. And nobody can say whether or not age discrimination has been involved. But as I wrote last week all the IBMers who have reported to me so far about their layoff situation are over 55, which seems fishy.
IBM has one of the largest legal departments of any U.S. company, plus another army of private lawyers available on command. They’ve carefully limited access to any useful information about the layoff and will no doubt fight to the finish to keep that secret. So who is going to spend the time and money to prove IBM is breaking the law? Nobody. As it stands they will get away with, well, something -- a something that I suspect is blatant age discrimination.
The issue here, in my view, is less the precedent set by Gilmer, above, than the simple fact that IBM hasn’t been called on its behavior. They have so far gotten away with it. They are flouting the law, claiming arbitration is a completely satisfactory alternative to a public court. Except it isn’t, because arbitration isn’t public. It denies the public’s right to know about illegal behavior and denies the IBM workforce the knowledge necessary to be treated fairly under the law. Arbitration decisions also don’t set legal precedents, so every new case starts on the same un-level playing field.
So of course I have a plan.
IBM’s decision to use this particular technique for dealing with potential age discrimination claims isn’t without peril for the company. They are using binding arbitration less as a settlement technique than as a way to avoid disclosing information. But by doing so they necessarily bind themselves to keeping age discrimination outside the blanket release employees are required to sign PLUS they have to work within the EEOC system. It’s in that system where opportunity lies.
Two things about IBM’s legal position in this particular area: 1) they arrogantly believe they are hidden from view which probably means their age discrimination has been blatant. Why go to all this trouble and not take it all the way? And 2) They are probably fixated on avoiding employee lawsuits and think that by forestalling those they will have neutralized both affected employees and the EEOC. But that’s not really the case.
If you want to file a lawsuit under EEOC rules the first thing you do is charge your employer. This is an administrative procedure: you file a charge. The charge sets in motion an EEOC investigation, puts the employer on notice that something is coming, and should normally result in the employee being given permission by the EEOC to file a lawsuit. You can’t file a federal age discrimination lawsuit without EEOC permission. IBM is making its employees accept binding arbitration in lieu of lawsuits, so this makes them think they are exempt from this part, BUT THERE IS NOTHING HERE THAT WOULD KEEP AN AFFECTED EMPLOYEE FROM CHARGING IBM. Employees aren’t bound against doing it because age discrimination is deliberately outside the terms of the blanket release.
I recommend that every RA’d IBMer over the age of 40 charge the company with age discrimination at the EEOC. You can learn how to do it here. The grounds are simple: IBM’s secrecy makes charging the company the only way to find out anything. "Their secrecy makes me suspect age discrimination" is enough to justify a successful charge.
What are they going to do, fire you?
IBM will argue that charges aren’t warranted because they normally lead to lawsuits and since lawsuits are precluded here by arbitration there is no point in charging. Except that’s not true. For one thing, charging only gives employees the option to sue. Charging is also the best way for an individual to get the EEOC motivated because every charge creates a paper trail and by law must be answered. You could write a letter to the EEOC and it might go nowhere but if you charge IBM it has to go somewhere. It’s not only not prohibited by the IBM separation agreement, it is specifically allowed by the agreement (page 29). And even if the EEOC ultimately says you can’t sue, a high volume of age discrimination charges will get their attention and create political pressure to investigate.
There’s another advantage to charging IBM: it can be done anonymously. And charges can be filed for third parties, so if you think someone else is a victim of age discrimination you can charge IBM on their behalf. This suggests that 100 percent participation is possible.
What will happen if in the next month IBM gets hit with 10,000 age discrimination charges? IBMers are angry.
Given IBM’s glorious past it can be hard to understand how the company could have stooped so low. But this has been coming for a long time. They’ve been bending the rules for over a decade. Remember I started covering this story in 2007. IBM seems to feel entitled. The rules don’t apply to them. THEY make the rules.
Alas, breaking rules and giving people terrible severance packages is probably seen by IBM’s top management as a business necessity. The company’s business forecast for the next several quarters is that bad. What IBM has failed to understand is that it was cheating and bending rules that got them into this situation in the first place.
Is IBM guilty of age discrimination in its recent huge layoff of US workers? Frankly I don’t know. But I know how to find out, and this is part one of that process. Part two will follow on Friday.
Here’s what I need you to do. If you are a US IBMer age 40 or older who is part of the current Resource Action you have the right under Section 201, Subsection H of the Older Worker Benefit Protection Act of 1990 (OWBPA) to request information from IBM on which employees were involved in the RA and their ages and which employees were not selected and their ages.
Quick like a bunny, ask your manager to give you this information, which they are required by law to do.
Then, of course, please share this information anonymously with me. Once we have a sense of the scope and age distribution of this layoff I will publish part two.
Back in the spring of 2012 Congress passed the Jumpstart Our Business Startups Act (the JOBS Act) to make it easier for small companies to raise capital. The act recognized that nearly all job creation in the US economy comes from new businesses and attempted to accelerate startups by creating whole new ways to fund them.
The act required the United States Securities and Exchange Commission (SEC) by the end of 2012 to come up with regulations to enable the centerpiece of the act, equity crowd funding, which would allow any legal US resident to become a venture capitalist. But the regulations weren’t finished by the end of 2012. They weren’t finished by the end of 2013, either, or 2014. The regulations were finally finished on October 30, 2015 -- 1033 days late.
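Just to check that figure, here is a quick back-of-the-envelope calculation, assuming the statutory deadline fell on December 31, 2012:

```python
# Verify the "1033 days late" claim, taking December 31, 2012 as the
# statutory deadline and October 30, 2015 as the date the rules were finalized.
from datetime import date

deadline = date(2012, 12, 31)
finalized = date(2015, 10, 30)
print((finalized - deadline).days)  # 1033
```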
And the crowd funding industry they enabled looks very different from the one intended by Congress. For most Americans and even most American startups, equity crowdfunding is not likely to mean very much and I think that is a shame. And ironically, whatever success US startups have in equity crowdfunding is more likely to happen overseas than in our own country.
Just for the record, what we are talking about here is equity crowdfunding -- buying startup shares -- not the sort of crowdfunding practiced by outfits like Kickstarter and IndieGoGo where customers primarily pay in advance for upcoming startup products.
The vision of Congress as written in the JOBS Act was simple -- there had to be an easier and cheaper way for startups to raise money and the American middle class deserved a way to participate in this new capital market. Prior to the act only "accredited investors" -- individuals with a net worth of $1 million or more or making at least $200,000 per year -- were allowed to invest in startups, so angel investing was strictly a rich man’s game. The JOBS Act was meant to change that, creating new crowd funding agencies to parallel venture capital firms, broker-dealers, and investment banks and allowing regular people to invest through these new agencies.
But equity crowd funding under Title III of the Act -- crowd funding for regular investors -- was controversial from the start, which may help explain why it has taken so long to happen. Once the act was signed into law in April 2012 the issue of potential fraud took center stage. Equity crowd funding, with its necessarily relaxed reporting and investor qualification requirements, looked to some people like a scam in the making. Having barely survived the financial crisis of 2008 precipitated by huge financial institutions, were we ready to do it all over again, but this time at the mercy of Internet-based scam artists? That was the fear.
The real story is, as always, more complex. Equity crowd funding promised to take business away from the SEC’s longtime constituents -- investment banks and broker-dealers. The idea was that an entirely new class of financial operatives would come into the market, potentially taking business away from the folks who were already there. And since the SEC placed the power to organize and administer equity crowd funding in the Financial Industry Regulatory Authority (FINRA) -- a private self-regulatory agency owned by the stockbrokers and investment bankers it regulates, the very people whose income was threatened by a literal interpretation of the JOBS Act -- no wonder things went pretty much to Hell.
There are problems with Title III equity crowd funding for all parties involved -- investors, entrepreneurs, and would-be crowd funding portals. What was supposed to be a simple funding process now has 686 pages of rules. Those rules say that unaccredited investors with an annual income or net worth of less than $100,000 can invest, across all crowdfunding issuers in a 12-month period, the greater of $2,000 or five percent of their annual income, while individuals with an annual income or net worth of $100,000 or more can invest 10 percent of their annual income, not to exceed $100,000 per year. And these investments have to be in individual startups (no crowdfunding mutual funds).
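For readers who want to see how those limits play out, here is a minimal sketch of the cap calculation as summarized above. The real SEC rule has further wrinkles (for instance, which of income or net worth the percentage is applied to), so treat this as an illustration, not legal advice:

```python
def title_iii_annual_limit(annual_income, net_worth):
    """Rough 12-month Title III investment cap, mirroring the summary above.

    Below the $100,000 threshold: the greater of $2,000 or 5% of income.
    Otherwise: 10% of income, never more than $100,000 in a 12-month period.
    The actual SEC rule has additional nuances; this is an illustration only.
    """
    if annual_income < 100_000 or net_worth < 100_000:
        return max(2_000, 0.05 * annual_income)
    return min(0.10 * annual_income, 100_000)

print(title_iii_annual_limit(60_000, 40_000))    # 3000.0
print(title_iii_annual_limit(250_000, 500_000))  # 25000.0
```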
For entrepreneurs the current maximum to be raised is $1 million and that has to be finished within 12 months plus there are more financial qualification expenses than for raising money from traditional VCs. Crowd-funding portals end up having more liability than they’d probably like for the veracity of the companies they fund making it possibly not worth doing for raises below $1 million.
Equity crowdfunding starts on May 15, so we won’t really know how it plays out until after then, but the prospects for Title III aren’t good. Angel funding (Title II) has no such investment or total raise limits, and there’s also a Title IV mini-IPO that allows companies, working through traditional broker-dealers, to raise up to $50 million. Plus there is the simple fact that VCs don’t like crowded cap tables, and if you expect your startup to need more money from them then Title III might work against you.
In the long run I suspect that SEC and FINRA caution will drive equity crowd funding offshore. The US isn’t the only country doing this and some of the others are both ahead in the game and more innovative, especially the UK and Israel. So if I wanted to raise a crowdfunding mutual fund to invest in a basket of American tech startups, which makes all the sense in the world for a guy like me to do, I’d do it in London or Tel Aviv, not in the USA, even though most of my investors might still be Americans.
Back in 1998 I spoke to the National Association of State and Provincial Lotteries, explaining to them the potential of the Internet as a gambling platform. The lottery officials were astounded to learn that they couldn’t stop the Internet at their state lines. I recommended they embrace the new technology and become gambling predators. Of course they didn’t and a robust international Internet gambling industry was the result. By moving so slowly and erring too much on the side of protection I fear the SEC and FINRA will guarantee a similar result for equity crowdfunding.
I promised a follow-up to my post from last week about IBM’s massive layoffs and here it is. My goal is first to give a few more details of the layoff primarily gleaned from many copies of their separation documents sent to me by laid-off IBMers, but mainly I’m here to explain the literal impossibility of Big Blue’s self-described "transformation" that’s currently in process. My point is not that transformations can’t happen, but that IBM didn’t transform the parts it should and now it’s probably too late.
First let’s take a look at the separation docs. Whether you give a damn about IBM or not, if you work for a big company this is worth reading because it may well become an archetype for getting rid of employees. What follows is my summary based on having the actual docs reviewed by several lawyers.
IBM employees waive the right to sue the company. The company retains the right indefinitely to sue the employee. IBM employees waive the right to any additional settlement. Even if IBM is found at fault, in violation of EEOC rules, etc., employees will not get any more money. The agreement is written in a way that dictates how matters like this will be determined in arbitration.
There is no mention of unemployment claims. Eligibility for unemployment compensation is determined and managed by each state. Each state has rules on who is qualified, the terms and conditions, etc. Some companies in some states have been known to report terminations in a way that disqualifies workers, thus saving the company money on unemployment insurance premiums. Some states have appeal processes. In others you may have to appeal the response with your former employer, which is of course the same bunch who just denied you (good luck with that).
IBM is being very opaque here about their process. Maybe it is hoping former IBMers won’t even think to apply for unemployment benefits. But if IBM takes a hardline position, the arbitration process and legal measures required would probably discourage many former employees from even trying. The question left unanswered then is how many of these folks will be able to receive their full 99 weeks of benefits?
The only way for employees to get more money or a better settlement is for their state or the federal government to sue IBM. In a settlement with a government, IBM could be made to pay its RA’d employees more.
There’s a final point that is being handled in different ways depending on the IBM manager doing the firing. It appears managers are being strongly urged to have their laid-off employees take their accrued vacation time prior to their separation date. Some managers are saying this is mandatory and some are not. From a legal standpoint it’s a bit vague, too.
Are they legally allowed to MAKE employees use their vacation time before separation? According to the lawyers I consulted, that depends on each person’s situation. If they have no work to do, then they may be required to use their vacation. If they are busy with work, then IBM can’t make them eat it. The distinction is important because IBM has been so busy in the past denying employees their vacation time that what’s accrued is in many cases more time than the puny 30-day severance.
To put this vacation pay issue in context, say you are a recently RA’d IBMer given three months' notice, a month’s severance pay, and you have four weeks of accrued vacation time. If you are forced to take that vacation during the three months before your separation date, well that’s four weeks less total pay. If the current layoff is around 20,000 people as I imagine, that could be 20,000 months, 1,667 man-years and close to $200 million in savings for IBM based on average employee compensation.
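Here is that back-of-the-envelope arithmetic spelled out; the 20,000 headcount and the roughly $120,000 average compensation are my assumptions, not IBM figures:

```python
# Rough check of the vacation-pay savings estimate above.
employees = 20_000             # assumed size of the layoff
months_saved = employees * 1   # one month (four weeks) of pay each
man_years = months_saved / 12  # about 1,667 man-years
avg_compensation = 120_000     # assumed average annual compensation
savings = man_years * avg_compensation
print(round(man_years), f"${savings:,.0f}")  # 1667 $200,000,000
```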
I wonder who got a bonus for thinking up that one?
What if laid-off IBMers don’t sign? What happens then? They retain the right to sue, but lose their jobs without any of the separation benefits.
Another question that arises from my contact with IBMers who have been laid-off is whether there is age discrimination in effect here. Of the laid-off (not retiring -- this is key) IBMers who have contacted me so far, 80 percent are over 60 and 90 percent are over 55. Now that could say more about my readers than about IBM’s employees, but if the demographics of the current IBM layoff differ greatly from the company’s overall labor profile that could suggest age discrimination.
I’m thinking of doing an online survey to help find out. Is that something you, as readers, think I should do?
Now what’s the impact of all this on the company? There’s anger of course and that extends to almost every office of every business unit since the layoffs are so broad and deep. Employees are so angry and demoralized that some are supposedly doing a sloppy job. I have no way of knowing whether this is true, but do you want one of those zombie employees (ones being fired in 90 days) writing code for your mission-critical IBM applications?
But wait, there’s more! At least the workers being laid-off have some closure. Some of the ones not picked this time are even more demoralized and angry because they have to stay and probably become part of some subsequent firing that will offer zero weeks severance, not four weeks. One reader’s manager actually told them that they were fortunate to be picked this time for exactly that reason.
All this turmoil hasn’t gone down without an effect on IBM managers, either, many of whom see their own heads on some future chopping block. I have been told there are many managers trying to justify their existence by bombarding their remaining employees with email newsletters and emails with links to "read more on my blog".
Readers report being swamped with so many of these it’s hurting productivity. Not to mention they are being asked to violate the company security policy by clicking on the email link -- an offense that could lead to termination. Remember this is all happening in the name of IBM’s "transformation".
What about that transformation, how is that going and -- for that matter -- what does it even mean?
Having read all the IBM press releases about the current transformation, listened to all the IBM earnings conference calls about it, and talked about it to hundreds of IBMers, it appears to me that this is not a corporate transformation at all, but a product transformation.
Every announcement is about a shift in what IBM is going to be selling. Whole divisions are being sold, product lines condensed and renamed, but it’s all in the name of sales. That is not corporate transformation. Maybe the belief is that IBM as a corporation doesn’t need to be transformed, that it’s a well-oiled money machine that just needs a better product mix to regain its mojo. Alas, that is not the case.
Last week I presented, but then didn’t refer to, an illustration of the generally accepted corporate life cycle. Here it is again:
And here’s a slightly different illustration covering the same process:
It’s the rebirth section I’d like you to think about because this is what Ginni Rometty’s IBM is trying to do. They want to create that ski jump from new technologies and use it to take the company to new heights. Ginni the Eagle. Our challenge here is to decide whether that’s possible.
If we accept that this second chart is the ideal course for a mature business, when is the best stage for building the ski jump and where does IBM fit today on that curve? It’s not at all obvious that the best place to jump from is the start of decline as presented.
Many business pundits suggest that the place to start is early in the maturity phase (Prime in the earlier chart), before the company has peaked. IBM certainly missed that one, so let’s accept that early decline (between Stability and Aristocracy) is okay. But is IBM in early decline or late decline, are they in Aristocracy, Recrimination or even Bureaucracy? I’d argue the latter.
IBM today has 13 layers of management, four layers of which were added by Ginni Rometty’s predecessor, Sam Palmisano. I don’t want to be too hardcore, but 13 layers is too many for any successful company. That alone tags IBM as being in late bureaucracy, rather like the Ottoman Empire around 1911. So by this measure IBM is probably too far along in its dotage to avoid dying or being acquired.
It’s important to note that despite a very large number of executive retirements (a different/better pension plan?) none of the IBM transformation news has so far involved simplifying the corporate structure. No eliminating whole management levels or, for that matter, reducing management at all.
This is not to say IBM can’t learn from its mistakes; it’s just that it doesn’t always learn as much as it should, and sometimes it learns the wrong lessons. We’ve seen some of this before. In the late ’80s and early ’90s, when John Akers was CEO, IBM’s business was changing, sales were dropping, and the leadership at the time was slow to cut costs.
Instead they increased prices, which further hurt sales and accelerated the loss of business. It was a death spiral that eventually led to desperate times, Akers’ demise, and the arrival of IBM’s first outsider CEO, Lou Gerstner from American Express. The lesson IBM learned from that was to put through massive cost cuts AHEAD of the business decline. Which of course today is again accelerating their loss of business.
In both the early 1990s and today IBM has shown it really doesn’t understand the value of its people. Before Gerstner people = billable hours = lots of revenue. Little or no effort was expended to improve efficiency, productivity, to automate, etc. The more labor-intensive it was to do something, the better.
This changed somewhat with Sam Palmisano, who refined the calculation to people = cost = something expendable that can be cut. Most companies with efficient and effective processes can withstand serious cuts and continue to operate well. IBM’s processes are not efficient, so the staff cuts are debilitating.
In both cases IBM needed to transform its business -- an area where IBM’s skills are quite poor. It takes IBM a ridiculous amount of time to make a decision and act on it. While most of today’s CAMSS is a sound plan for future products and services, IBM is at least five to ten years late bringing them to market. Most of the world has learned to develop new products and services and bring them to market faster (Internet time), yet IBM still moves at its historic glacial pace.
This slow pace of management is not just because there are so many layers, but also because there is so much secrecy. IBM does not let most employees -- even managers -- manage or even see their own budget. IBM does not let them see the real business plan, either. Most business decisions are made at the senior level where the decision makers can’t help but be out of touch with both the market and their employees. How can anyone get anything done when senior executives must approve everything?
So if you can’t make quick decisions about much of anything, how do you transform your product lines? Well at IBM you either acquire products (that’s an investment, remember, rather than an expense and therefore not chargeable against earnings) or you take your old products and simply rename them.
IBM is a SALES company run by salespeople. It takes brains and effort to improve a product or create a new one. However if you can take an old product, rename it, and sell it as something new then no brains or effort are needed.
IBM products change names every few years. A few years ago several product lines became Pure. One of these Pure products was a family of Intel servers with some useful extra stuff. When it sold the X-Series business what happened to the Pure products? Nobody appears to know. Pure was just a word and no one was really managing the brand. It was up to each business unit to figure out what to do with its Pure products. Could IBM sell Pure servers? Did it even still make Pure servers? I still don’t know.
Remember On-Demand and Smarter-Planet? Marketing brain farts. New products do not magically appear with each new campaign. It is mostly rebranding of existing products and services.
IBM is now renaming its middleware software stack, for example. Everything has Connect in it now, but it’s lipstick on a pig. They are about to do a major sales push on this stuff to unwitting customers.
They took the mobile product stack out of the cloud area and put it back in middleware, so that’s getting crammed into other stuff too, probably some of the above. Sales reps are being threatened with firing if they don’t meet their quotas this quarter or next so they will be pushing customers hard. None of this new software is well-tested of course.
CAMSS isn’t really new, either, according to one departing IBMer: "Most of the IBM CAMSS products have become a hodge-podge of bolted on code from acquisitions, and we all know how that goes. The products are unstable, bug-laden, feature multiple different administrative UIs (from the bolted-on stuff), don’t scale well if at all, are hard to administer, etc. The SaaS apps that have been shoe-horned into the cloud are badly broken and stripped of major features just to say they are 'there in the cloud'".
IBM may be hiring a lot of people for its new CAMSS lines of business. But in a few years, when the services death spiral is complete, IBM will be mainly dependent on CAMSS to make money. If CAMSS cannot pick up the slack, the same people = cost = something expendable that can be cut mindset will kick in. Many of those hired to build CAMSS will be cut and those businesses will begin to founder, too.
IBM’s weakness is obvious to competitors, who are swooping in. Microsoft, for example, is porting SQL Server to Linux. The functionality is comparable to Oracle or DB2 and the cost savings are significant. This is a brilliant and bold move by Satya Nadella that would never have even occurred to Steve Ballmer or Bill Gates.
For one thing it will force Oracle to become more competitive, lower their prices, and lighten up on their licensing. When Oracle cuts prices, IBM cuts prices. It could be a death blow to DB2 and cause collateral damage to many IBM software products.
Even IBM Analytics aren’t what they seem. In many ways IBM’s analytics business is very much like the early days of computing. You buy the hardware, software, and tools or get them from a cloud service. Many parts are commercial, licensed products so they’re going to cost you some money.
You hire an IBM expert to adapt it to your business and write the code needed to get it to solve your problems. IBM makes money on hardware, software, and billable hours. IBM is not breaking a lot of new ground in this field (don’t get me started on Watson -- it’s even worse). Google, Yahoo, and Facebook pioneered the current generation of big data technology. Many startups have developed tools that can process the data managed by these technologies.
Analytics is a field where academia and open source are competing with commercial efforts. IBM has many very smart data scientists working in the analytics business and they’re doing many interesting things. But then so does the rest of the world. Two leading commercial statistical analysis packages are SAS and SPSS. Both have been on the market since the 1970s. Both are very good, mature, and cost money to use. SPSS was purchased by IBM a few years ago.
Academia is historically cash tight and finds creative ways to do things without spending lots of money. One of the products of that effort is R, an Open Source statistical analysis tool that competes with both SAS and SPSS. Microsoft recently bought one of the companies that productized R. Can you see where this is going?
What’s left for IBM? Patents. IBM is becoming a patent troll with an army of people dreaming up technology ideas and patenting them not to develop products, but to demand royalties from other companies developing products. Mobile and Social could be mostly a patent troll, designed primarily for IBM to profit off of the work of others.
Its patent portfolio may be the only thing of enduring value in IBM. Just a few days ago IBM went after Groupon. IBM will be jumping on more and more companies in the USA who are trying to develop new products. Innovation could end up going to countries beyond the reach of the IBM legal department. Now there’s an unexpected side-effect.
The lesson in all this -- a lesson certainly lost on Ginni Rometty and on Sam Palmisano before her -- is that companies exist for customers, not Wall Street. The customer buys products and services, not Wall Street. Customers produce revenue, profit, dividends, etc., not Wall Street. IBM has alienated its customers and the earnings statements are showing it. Sam turned IBM against its customers and employees, and started catering instead to Wall Street, which narcissistically loved the idea. Ginni inherited a mess and hasn’t figured out what is happening, why, or how to fix it.
Not an eagle after all.
Avram Miller, who is my friend and neighbor here in rural Sonoma County, wrote a very insightful post on the passing of Andy Grove.
It’s well worth reading.
My own experience with Andy Grove was limited. I knew Bob Noyce and Gordon Moore much better. But I do recall a time when Grove and I were both speakers at a PBS national meeting and sat together.
He corrected my pronunciation of the word Zoboomafoo, the title of a PBS animal series for preschoolers.
You may recall my three sons ran a successful Kickstarter campaign last fall for their $99 Mineserver, a multiuser Minecraft server the size of a pack (not a carton) of cigarettes. On the eve of their product finally shipping here’s an update with some lessons for any complex technical project.
At the time we shot the Kickstarter video my kids already had in hand a functional prototype. Everything seen in the video was real and the boys felt that only producing custom cases really stood in the way of shipping. How wrong they were!
First we needed a mobile app to administer the server so we hired an experienced mobile developer through guru.com. His credentials were great but maybe it should have been a tip-off when, right after we made the first payment, the developer moved from Europe to India. The United States Postal Service guaranteed us it would take no more than five days for our development hardware to reach Mumbai. It took three weeks. And our USPS refund for busting the delivery guarantee has yet to appear.
We were naive. The original development estimate was exceeded in the first week and we were up to more than 8X by the time we pulled the plug. Still the developer kept trying to charge us, eventually sending the project to arbitration, which we won.
Our saving grace was we found a commercial app we could license for less than a dollar per unit. Why hadn’t we done this earlier? Well it wasn’t available for our ARM platform and didn’t work with our preferred Minecraft server, called Cuberite (formerly MC Server).
Enter, stage right, the lawyers of Minecraft developer Mojang in Sweden.
Mojang is a peculiar outfit. They are nominally owned by Microsoft yet Redmond is only very slowly starting to exert control. I’m not sure the boys and girls in Sweden even know that. Minecraft server software is free, Mojang makes its money (lots of money) from client licenses, so they and Microsoft ought to want third-party hardware like ours to be successful. But no, that’s not the case.
I’ve known people at Microsoft since the company was two years old so it was easy to reach out for support. Maybe we could do some co-marketing?
Nope, that’s against Mojang rules, Redmond told us, their eyes rolling at the same time.
We urged them to consider a hardware certification program. We’d gladly pay a small royalty to be deemed Ready-for-Minecraft.
Nope, Mojang refuses to have anything to do with hardware developers. Oh, and both our logo and font were in violation of Mojang copyrights, so change those right away, please.
Swedes are very polite but firm, much like Volvos.
Minecraft, which is written in Java, is nominally Open Source, but there are some peculiar restrictions on distributing the code. The server software can’t be distributed pre-compiled. For that matter it also can’t be distributed even as source code if the delivery vehicle is a piece of operational hardware like our Mineserver. Our box would have to ship empty then download the source and compile it for our ARM platform before the first use, making everything a lot more difficult.
Now let’s be clear, this particular restriction technically only applies to one version of the Minecraft server, usually called Vanilla -- the multiuser server distributed for free directly by Mojang. There are other Minecraft servers that, in theory, we ought to be able to ship with our little boxes except Mojang has all the developers so freaked that nobody does it. Besides, Vanilla is the official Minecraft server and some people won’t accept anything else.
But our experience shows Vanilla Minecraft isn’t very good at all. In fact it is our least favorite server, primarily because it supports only a single core on our four-core and eight-core boxes. As such Vanilla supports the lowest number of concurrent Minecraft players. A better server like Spigot can support 2-3 times as many users as Vanilla.
The best Minecraft server of all in our opinion is Cuberite, which is also the only one written in C++ instead of Java. Cuberite extracts far more performance from our hardware than any other server, which is why we chose to make it our de facto installation. We’ll also support Vanilla, Spigot and Tekkit Lite (you can switch between them), but Cuberite will be the first server to compile on the machine.
The only problem with Cuberite is that the off-the-shelf admin application we discovered doesn’t support it. Or didn’t. The very cooperative admin developer in the UK is extending his product to support Cuberite. This should be done soon and waiting for Cuberite is a major reason why we haven’t shipped. We’re hoping to have it in a few more days.
But waiting for Cuberite wasn’t our only problem. We had to develop a dynamic DNS system, Wi-Fi support, and make sure the units were totally reliable.
Oh, and our laser cutter burst into flames.
Understand that for a Mineserver or Mineserver Pro, the sysadmin also typically goes by another title -- Mom. Our administration tool allows her to control the server from any Internet-connected computer including Android and iOS mobile phones. She can bump or ban players from the frozen food aisle, monitor in-game text chat, reboot the server -- anything. It’s a very powerful and easy-to-use tool.
While we were waiting for Cuberite support we added something else for Mom to worry about: a Mumble server. Mumble is open source voice chat with very low latency. We were able to add Mumble to Mineserver because the CPU load is very low, with all encoding and decoding done in the client and the server acting mainly as a VoIP switch. If she wants to, Mama can listen to the Mumble feed and step in if little Johnny drops an F-bomb.
Every Mineserver has its own individual name chosen by the customer. This server name, rather than an IP address, is how whitelisted players find the game. When we consulted dynamic DNS experts before the Kickstarter campaign, this sounded easy to do with a combination of A records and SRV records. But it’s not so easy, because Mom doesn’t want to have to do port forwarding, so that meant adding other techniques like UPnP, which is tough to do if it isn’t turned on in your router. We eventually developed what the boys believe is a 95 percent solution. In 95 percent of cases it should work right out of the box, with the remaining five percent falling on the slim shoulders of some Cringely kid.
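To make the SRV-record part concrete, here is a small sketch of the lookup a Minecraft client performs to turn a server name into a host and port. It uses the third-party dnspython package, and the hostname is a placeholder, not an actual Mineserver name:

```python
# Resolve the SRV record Minecraft clients consult to find a server's
# host and port, so players can connect by name without knowing the IP.
import dns.resolver  # third-party dnspython package

def find_minecraft_server(server_name):
    answers = dns.resolver.resolve(f"_minecraft._tcp.{server_name}", "SRV")
    record = answers[0]
    return str(record.target).rstrip("."), record.port

host, port = find_minecraft_server("example-mineserver.example.com")
print(host, port)
```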
Every Mineserver is assembled by a specific child who is also responsible for product support. His e-mail address is right on the case and if something doesn’t work he can ssh, telnet, or VNC into the box to fix it.
Somewhere in this mix of challenges we lost our primary Linux consultant. We still don’t know what happened to him; he just stopped responding to e-mails. The next consultant really didn’t have enough time for us, but finally, with the help of our admin developer, we found a guy who has been doing a great job. He helped us switch our Linux server distribution, with several positive results, and helped come up with the custom distro we use today.
But still there were problems, specifically Wi-Fi.
Wi-Fi was something we’d rather not do at all, but it has become the new Ethernet (even Ethernet inventor Bob Metcalfe pretends Wi-Fi is Ethernet, which it isn’t but we still love Bob). Many home networks are entirely Wi-Fi. We feel the best way to use a Mineserver even in Wi-Fi-only homes is by plugging the included CAT6 cable into a router or access point port and using the router’s Wi-Fi capability. But some customers don’t want to plug anything into anything, so we’ve included native Wi-Fi support in some Mineservers and all Mineserver Pros. That sounds easier to do than it actually was.
Mineservers are headless, so how do you set an SSID or password the first time? Good question, but one we finally solved. Mineservers can be configured and administered entirely without wires if needed. In most situations the customer will plug their Mineserver into power and it will just work. If it doesn’t, then an 11-year-old will fix it. That first power-up will involve downloading and compiling the selected server software, which can be changed at any time. It’s a process that takes 5-10 minutes and then you are up and running.
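Roughly speaking, that first power-up amounts to something like the sketch below: fetch the selected server’s source and build it on the box itself. The repository URL, paths, and build commands are placeholders (Cuberite happens to use a CMake-style build), not the Mineserver’s actual provisioning code:

```python
# Sketch of a first-boot step that downloads server source code and compiles
# it on-device, since the license terms described above rule out shipping
# some servers pre-compiled. URL, paths, and commands are placeholders.
import subprocess

SOURCE_REPO = "https://example.com/selected-server-source.git"  # placeholder
BUILD_DIR = "/opt/mineserver/build"                             # placeholder

def first_boot_build():
    subprocess.run(["git", "clone", "--depth", "1", SOURCE_REPO, BUILD_DIR],
                   check=True)
    # Configure and compile for the local (ARM) platform; takes several minutes.
    subprocess.run(["cmake", "."], cwd=BUILD_DIR, check=True)
    subprocess.run(["make", "-j4"], cwd=BUILD_DIR, check=True)

if __name__ == "__main__":
    first_boot_build()
```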
What we hope is our final technical problem has been particularly vexing. We now have three Mineservers and three Mineserver Pros running at the sonic.net data center here in Santa Rosa. All six servers plus a power strip and a gig-Ethernet switch fit on a one-foot-square piece of plywood. The truly great folks at Sonic gave us half a rack and we fill perhaps one percent of that, meaning you could probably put 1200 Mineservers in a full rack -- enough to support up to 60,000 players. But operating in this highly secure facility with its ultra-clean power and unlimited bandwidth, we began to notice during testing that sometimes the servers would just disappear from the net. One minute the IP would be there and the next minute it would be gone.
We’re still waiting for Cuberite support of course, but even if we had that today we still can’t ship a product that disappears from the net. We’ve tried swapping out boards but the problem still occurs. Maybe it was the gig-Ethernet switch, so we got a new one, then a bigger one, then an even bigger managed switch. We changed cables. We started fiddling with the software. Each Mineserver board has a serial port, so we converted an old Mac Mini to Linux, added a powered USB hub and six UART-to-USB adapters, and now our consultant in Texas can use six virtual serial terminals to monitor the test Mineservers 24/7 without having to rely on their Ethernet connections. Everything is being logged so the next time one goes down we’ll know exactly what’s happening.
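For the curious, here is roughly what that kind of serial-console logging looks like. It assumes the third-party pyserial package and typical /dev/ttyUSB device names and baud rate, which are my guesses, not the actual test rig’s settings:

```python
# Log whatever a Mineserver prints to its serial console, with timestamps,
# so an outage can be correlated with the last messages before it happened.
# Device paths and baud rate are assumptions about a typical UART-to-USB setup.
import datetime
import serial  # third-party pyserial package

PORTS = [f"/dev/ttyUSB{i}" for i in range(6)]

def log_console(port_name, baud=115200):
    with serial.Serial(port_name, baud, timeout=1) as port, \
         open(f"{port_name.split('/')[-1]}.log", "a") as log:
        while True:
            line = port.readline().decode(errors="replace").strip()
            if line:
                log.write(f"{datetime.datetime.now().isoformat()} {line}\n")
                log.flush()

if __name__ == "__main__":
    log_console(PORTS[0])
```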
We’re also in touch with other users of the same board, like Lockheed Martin and Lawrence Livermore Lab, where they have a cluster of 160. But that’s nothing compared to three kids in Santa Rosa who are right now burning in 500 boards.
It’s the final bug, we’re approaching it with planning, gusto, and plenty of Captain Crunch, and fully expect to solve this last issue and start shipping next week when the kids are off school for Spring Break.
Mineservers as a business so far aren’t quite as good as the boys had hoped. The Kickstarter units are losing an average of $15 each (so far). But $7500 in the hole is not much cost to start a technology business. And with their marketing strategy (called "F-ing brilliant" by a VC friend) the boys are hoping to sell 100+ post-Kickstarter units per month to eventually pay for college.
The FBI holds an iPhone that was owned by one of the San Bernardino terrorists, Syed Rizwan Farook, and wants Apple to crack it. Apple CEO Tim Cook is defying the FBI request and the court order that accompanied it, saying that cracking the phone would require developing a special version of iOS that could bypass passcode encryption. If such a genetically modified mobile OS escaped into the wild it could be used by anyone to crack any current iPhone, which would be bad for Apple’s users and bad for Amurica, Cook says. So he won’t do it, dag nabbit.
That’s the big picture story dominating the tech news this week. However compelling, I’m pretty sure it’s wrong. Apple isn’t defying the FBI. Or at least Apple isn’t defying the Department of Justice, of which the FBI is supposed to be a part. I believe Apple is actually working with the DoJ, which doesn’t really want to compel Apple to do anything except play a dramatic and very political role.
Now for some more details. In order to get their court order the FBI had to tell the judge that its own lab couldn’t crack the phone. Or maybe they said their lab didn’t crack the phone. Nobody knows. But the first question any cynic with technical bones would ask is, "Can’t the CIA/NSA/Steve Gibson, somebody crack that darned iPhone?"
John McAfee, who is one of my absolute favorite kooks of all time, says he can do it, no problem, in about a month. McAfee says the FBI is just cheap and unwilling to drop big bucks on the right bad guys to make it happen, which kinda suggests that iPhones have been broken into before, doesn’t it?
One important point: I know John McAfee and if he says he can do it, he can do it.
SEE ALSO: Poll: Should Apple help the FBI unlock the San Bernardino iPhone?
There’s something that doesn’t smell right here. The passage of time, the characters involved, the urgency of anti-terrorism make me strongly suspect that the innards of that iPhone are already well known to the Feds. If I were to do it I wouldn’t try cracking the phone at all, but its backup on a Mac or PC or iCloud, so maybe that’s the loophole they are using. Maybe they didn’t crack the iPhone because they didn’t have to. Or maybe some third party has already cracked it, leaving the FBI with that old standby plausible deniability.
Let’s drop for a moment the technical arguments and look at the legal side. If you read Fortune on this issue it looks like Apple will probably prevail. The legal basis for compelling Apple to invent a key for a lock that’s not supposed to even exist is flimsy. This does not mean that the FBI couldn’t prevail in some courts (after all, they convinced the judge who issued the original order). But it’s really a legal tossup who wins at this point, or appears to be. Remember, though, that when it comes to lawyers Apple can probably afford better help than can the U.S. Government.
So Apple is being seen as an unpatriotic pariah, with Silicon Valley companies like Facebook and Twitter only in the last few hours finally starting to support Cupertino.
Before I tell you what I think is actually happening here let me add one more piece of data. At the same time the FBI is pushing for unprecedented power to force decryption of devices, another news item appeared this week: Columbia University computer scientist Steve Bellovin has been appointed the first technology scholar for the Privacy and Civil Liberties Oversight Board -- the outfit that oversees these very activities at federal agencies including the NSA, CIA and, yes, the FBI. Up to this point the Board has never had as a voting member someone who actually understands this stuff in real depth. And professor Bellovin does much more than just understand this technology, he’s publicly opposed to it as co-author of a seminal report, Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications. The report, which was published last July by MIT, clearly takes the position that the FBI is wrong in its position against Apple. Not legally wrong -- this isn’t a legal document -- but wrong in terms of proper policy. Just as the Clipper Chip was a bad idea when I wrote about it right here almost 20 years ago, this forced hacking of iPhones is a bad idea, too, or so claims Bellovin.
Would Donald Trump, Ted Cruz, Marco Rubio, or even Jeb Bush have appointed Bellovin to that board? I don’t think so.
So wait a minute. There’s plenty of reason to believe that Apple complying with the FBI order is bad policy, it’s legally shaky, and at least one of the people who makes the strongest arguments in this direction is now voting on a secret government board? What the heck is going on here?
What’s going on is Justice Antonin Scalia is dead.
Had Justice Scalia not died unexpectedly a few days ago (notably before the Apple/FBI dustup) and had the FBI pursued the case with it landing finally in the Supreme Court, well the FBI would have probably won the case 5-4. Maybe not, but probably.
With Justice Scalia dead and any possible replacement locked in a Republican-induced coma, the now eight-member Supreme Court has nominally four liberal and four conservative justices but at least 1.5 of those conservatives (Justice Kennedy and sometimes Chief Justice Roberts) have been known to turn moderate on certain decisions. This smaller court, which will apparently judge all cases for the next couple years, is likely to be more moderate than the Scalia Court ever was.
So if you are a President who is a lawyer and former teacher of constitutional law and you’ve come over time to see that this idea of secret backdoors into encrypted devices is not really a good idea, but one that’s going to come up again and again pushed by nearly everyone from the other political party (and even a few from your own) wouldn’t right now be the best of all possible times to kinda-sorta fight this fight all the way to the Supreme Court and lose?
If it doesn’t go all the way to the Supremes, there’s no chance to set a strong legal precedent and this issue will come back again and again and again.
That’s what I am pretty sure is happening.
A third of the people who read this column don’t live in the USA so maybe this prediction isn’t interesting to them, but I think Apple will buy Dish Network, the American direct satellite TV broadcaster. It’s the only acquisition that will give Apple the kind of entry point they want into the TV business, allowing Cupertino to create overnight an over-the-top (OTT) Internet streaming video service -- effectively an Internet cable system.
Buying Dish would be a bold move for Apple because all the benefits Cupertino seeks aren’t obviously available. True, Dish has 14 million U.S. subscribers (I am one of those) who get 100+ channels of TV from the sky. True, Dish has an existing OTT streaming service called Sling that already offers a subset of the company’s cable channels. But it doesn’t necessarily follow that Dish could simply transfer its satellite content to the Internet, at least beyond what it does already with Sling.
There is, however, a long tradition of brash TV operators being rewarded for their brashness. The heart of Ted Turner’s first fortune was WTBS, the UHF TV station in Atlanta that grew out of his family’s billboard business. Turner threw WTBS up on a satellite giving his episodes of All in the Family a national reach that to many didn’t seem to be supported by Ted’s syndication contracts. Still, WTBS is a success today, so it worked. And don’t forget how Netflix paid only $25 million per year to Starz back in 2008 for access to 2,500 Disney and Sony movies that would cost hundreds of millions today. Starz was Netflix’s streaming killer app. And Dish will be Apple’s because gaining access to all that content will be worth to Cupertino whatever it costs. Even if it destroys Dish in the process, Apple will succeed, which is exactly why it will succeed.
Dish is for sale. Every company is for sale but Dish is especially so following AT&T’s acquisition of Dish’s main competitor, DirecTV, last year for $48 billion. Apple won’t have to pay that much for Dish but it could. Each Dish employee represents only $800,000 in sales so many of those 19,000 workers will have to go. But if Apple outsources just satellite dish installation and customer support it’ll easily pull the revenue numbers up toward the target $2 million per employee.
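For what it’s worth, here is the arithmetic those figures imply, holding revenue constant (a simplification, of course):

```python
# Revenue-per-employee math using the figures quoted above.
employees = 19_000
sales_per_employee = 800_000
revenue = employees * sales_per_employee              # $15.2 billion
target_per_employee = 2_000_000
headcount_at_target = revenue / target_per_employee   # 7,600 employees
print(f"${revenue / 1e9:.1f}B", round(headcount_at_target))  # $15.2B 7600
```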
One Dish employee Tim Cook will be sure to keep is CEO Charlie Ergen — a tough-yet-charismatic operator who built Dish from scratch and knows his industry. Apple won’t succeed without a Charlie Ergen at the TV controls. Ergen’s already a billionaire but he’ll stick around for a chance to turn TV on its head.
But acquiring Dish would do more than just turn TV and maybe movies on their heads, it would have an impact on mobile phone and data service, too. Dish is one of the largest owners of unused wireless spectrum that can be used as a bargaining chip in those content negotiations or possibly put Apple into the wireless data business. Dish also sells the Slingbox, which allows watching your cable or satellite content over the Internet, so there’s a patent-protected addition to the Apple TV available, too.
Apple wants in the content business but buying Time Warner (not Time Warner Cable) won’t do it. Apple needs hundreds of networks, not just 14 channels of HBO. In order to succeed Apple needs to acquire a company that has legacy rights to that content -- rights that Apple can through sheer force of will ride for a year or two until the market transitions and cable TV starts to die.
Yes, Apple wants to kill cable TV. So do AT&T and Verizon. So do Microsoft and Sony. So did Intel until it gave up. The difference is that Apple’s deep pockets and Dish’s moxie can actually make it happen.
I know I promised that my next 2016 prediction would be Apple’s big acquisition, and I will publish that prediction soon as my #10, but right now I just have to say what a perilous position Intel is in. The company truly risks becoming irrelevant, which is an odd thing to say about a huge, rich outfit that would appear from the outside to pretty much dominate its industry -- an industry the company created. Intel won’t go away, I just think there is a very good chance it’ll no longer matter.
We’re approaching the end of the closed, proprietary, single source technology era. ARM processors are freely licensed, more open, and much more cost competitive than similar products from Intel or AMD. If you need 10 million chips for your next product do you buy them from Intel? Or do you get a license from ARM and hire a foundry to make them for you?
The same can be said of operating systems. Do you go buy 10 million licenses from Microsoft, Apple, IBM, or..? Or do you go get a blanket license for Android from Google?
The interesting questions that will determine the future are:
Will Intel start making ARM chips? It has done it before: remember StrongARM? If Intel doesn’t re-embrace ARM for at least some of its line it will be a much smaller company in a few years.
Microsoft’s CEO seems to be quite smart and a good visionary. I am more optimistic about Microsoft’s future now than I’ve been in years. Will Microsoft start making Android products and applications? Porting Office to Android/ARM would be a better strategic decision than porting Office to the Mac was.
Back to Intel, the company made a lot of news recently by laying out a new corporate strategy based on data centers, the Internet of Things, and memory -- explicitly de-emphasizing both personal computers (in decline) and mobile (where it hasn’t had much success). I think this is smart, but unless Intel follows it up with better tech in the very areas ARM has come to dominate the strategy won’t work.
Just to take one example, there’s a huge opportunity in data centers, which is to say building clouds. Most commercial clouds are based on free, very cheap, and/or open source technology. They use a low cost hypervisor. The disk storage is not as fast, secure, or reliable as it needs to be. We’re going to have some technology burps along the way as these deficiencies both become known and are taken advantage of by the bad guys. When that happens we will start to question our needs -- question the cloud.
Intel can fix these problems and move on to even greater heights, but I don’t think it knows how.
At least one reader pointed out that I somehow missed 2016 Prediction #4, so let me throw something in right here. Steve Jobs: The Lost Interview will shortly return to Netflix worldwide!
Our movie was on Netflix in the USA and Canada for a couple of years (it’s still streaming on Netflix in the UK) but the North American deal ended sometime in November when rights reverted from Magnolia Pictures back to John Gau Productions. The film had already disappeared from iTunes and Amazon, etc., but we hadn’t noticed because, well, Magnolia didn’t bother to mention it and we’re only pretending to be movie producers.
You don’t work directly with these streaming outfits if your body of work is one movie made from a VHS tape found on a shelf in somebody’s garage. You go through an aggregator. Once we finally realized that our film had lost distribution (the money was so little and the delay in payment was so long that it was hard to even tell) we set about finding a new aggregator, which we finally found in Los Angeles-based Bitmax. Still, it then takes months to get the movie back up and streaming even in places where it was running just fine last year. So don’t expect to see the film again on Amazon or iTunes, for example, until sometime this spring. And it probably won’t make Netflix again until summer.
Our Bitmax deal doesn’t involve sharing revenue. We learned our lesson and are paying the company a fixed fee with them passing-through all royalties. It will be a year or so before we know whether this was a smart move or not but for now it feels right.
The new Netflix deal isn’t through Bitmax because they didn’t then have their own deal with Netflix (now they do, apparently), so we extended our non-exclusive deal with the UK aggregator, Filmbuff, to cover the entire Netflix world. Little did we know that Netflix would shortly be adding 130+ countries to their list!
So the movie is returning, but slowly, and we expect it to be around for many years to come. Viewers love it of course and we’re proud to have made something so long ago that holds up so well.
But while I have you on the line let’s talk a little about Netflix, itself. I first met Netflix chief content guy Ted Sarandos at a winery in the hills above Silicon Valley in 1998. It was at a corporate event for Maxtor, the hard drive company (remember them?), and I was the dinner speaker. Pay me a lot of money and I’ll speak for your company, too.
Sarandos was attending for Netflix and after dinner and my 50 minutes of this-and-that we sat at the tasting bar and he told me the Netflix strategy, which was to become exactly what they are today. This was 18 years ago, back when Netflix made all its money delivering DVDs through the Post Office, remember, and didn’t stream video at all. Yet even then streaming and producing original shows was the plan. I was impressed.
And I remain impressed. I’m not here to make any 2016 predictions about Netflix, but I wouldn’t bet against them. To maintain a corporate strategy with such success for more than 18 years is a wonder in high tech. It shows vision, discipline, and luck -- the three components any tech company needs to change the world.
If my last prediction about the Internet of Things becoming a security nightmare seemed a no-brainer to half of my readers, as some commenters suggested, this prediction that Apple won’t buy Time Warner will probably be a no-brainer for the other half, simply because it is always easier to say an acquisition or merger won’t happen than that it will. But I think there is something to be learned from why I don’t think this acquisition will take place -- something that says a lot about Apple as a company.
This topic comes up at all because, as frequently happens these days, activist investors are trying to bully Time Warner into selling all or part of itself, this after having already bullied the company into spinning off its cable TV operation and then its print publishing operation. So now what’s mainly left at Time Warner are cable TV networks, TV and film production and distribution, and a modest online operation. All of this, but especially premium cable channel HBO, is supposed to appeal to Apple’s eye for quality.
The thinking is pretty simple: Apple wants to build an Internet virtual cable TV service and having an HBO exclusive will cement the success of that service.
If it were that simple I’d agree heartily, but there are a couple problems with this idealized picture. For one, even if Apple buys just HBO or all of Time Warner that ownership doesn’t convey anything like exclusivity. HBO has existing agreements with hundreds of cable and satellite providers around the globe and nothing exclusive can happen until those deals run out or are canceled, which would take years or cost billions in penalties.
Apple might be interested in Time Warner anyway -- and I might urge Cupertino to take the chance -- except there’s a key acquisition benchmark that isn’t being met here, which is sales per employee. Time Warner’s is too low.
Who pays attention to sales per employee, anyway? Well Apple does and always has, and if you look at the acquisitions the company has done, none of them as far as I can tell caused Apple’s overall sales per employee to drop.
Apple’s annual sales per employee stand at about $2.12 million. Time Warner’s is $1.1 million, which is high by most standards (IBM, in comparison, has approximately $250K in sales per employee) but under investor pressure TWI’s $1.1 million is probably as lean as the company can get, meaning it simply isn’t an Apple-like business.
Adding 65,000 TWI employees to the 110,000 folks already working at Apple would inevitably change and hurt the company. At least that’s the thinking on this subject explained to me one day by Steve Jobs, himself, who I guess probably came up with it. Steve told me he wouldn’t buy a company unless it was strategic and matched or exceeded Apple’s labor leverage.
Even Apple’s controversial 2014 Beats acquisition meets the test. Beats had $1.5 billion in sales and 700 employees for an average of $2.14 million in sales per employee. Apple immediately laid-off 200 Beats employees, remember, raising sales per employee at both Beats and its new parent. The layoffs were done in the name of reducing duplication and redundancy, but isn’t that the entire point of this benchmark, making the overall enterprise even more efficient?
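To make the benchmark concrete, here is a quick back-of-the-envelope sketch using only the round numbers above; the combined figure is my own arithmetic, working backward from those per-employee numbers, not anything Apple has published.

```python
# Back-of-the-envelope sketch of the sales-per-employee benchmark, using the
# approximate figures cited above. The "combined" number is derived, not reported.

def sales_per_employee(annual_sales, employees):
    return annual_sales / employees

apple_spe, apple_heads = 2.12e6, 110_000     # ~$2.12M per head
tw_spe, tw_heads = 1.1e6, 65_000             # ~$1.1M per head
beats_spe = sales_per_employee(1.5e9, 700)   # ~$2.14M per head -- passes the test

# Implied annual revenue, working backward from the per-employee figures
apple_sales = apple_spe * apple_heads        # ~$233B
tw_sales = tw_spe * tw_heads                 # ~$71.5B

combined_spe = sales_per_employee(apple_sales + tw_sales,
                                  apple_heads + tw_heads)

print(f"Beats at acquisition: ${beats_spe / 1e6:.2f}M per employee")
print(f"Apple + Time Warner:  ${combined_spe / 1e6:.2f}M per employee")
```

Buying Beats kept the number above $2.1 million; absorbing Time Warner would drag it down to roughly $1.74 million, call it an 18 percent hit to Apple’s labor leverage.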
So Apple won’t buy Time Warner because doing so would be too disruptive to the acquiring company and Steve Jobs would appear in Tim Cook’s dreams to torment him about it. You know he would.
But this doesn’t mean Apple won’t make a big acquisition this year, just that it has to be strategic and meet the benchmark. I have an idea what such an acquisition might be and that will be my next prediction.
What company do you think Apple will buy in 2016?
This one is simple -- a confluence of anti-hacking paranoia and the Internet of Things (IoT) that will lead to any number of really, really bad events in 2016.
Remember how the CIA or the NSA or whatever agency it was hacked the Iranian nuclear centrifuges making enriched uranium a few years ago? Doctored code found its way into the machines’ control systems, eventually causing the centrifuges to overspeed and shake themselves to pieces, putting the Iranian nuclear program months or years behind.
Now imagine much the same thing happening to your Internet-connected thermostat, baby monitor, or car. We’ve already seen hacking demonstrations kill cars as they drive down the street. Well there will be lots more where that came from.
I’m sure we’ll see one or more really serious IoT data security breaches with profound negative effects in 2016, destroying property and possibly costing lives. This is unlikely to be the work of script kiddies and more likely to be state-sponsored.
The cyber war has already begun.
This is not to say we should abandon the IoT or the Internet (it’s already too late for either), but hardening this new networking segment and making it more resilient is vitally important, as is finding ways to monitor the IoT and quickly recover from attacks.
One thing is for sure: IoT data security is going to become a huge business over the coming years -- probably bigger than the IoT, itself.
When it comes to predictions it is often easiest just to take some really popular new technology and point out how much longer actual adoption will take than the hype suggests. You could say I’m doing that here with drone deliveries and driverless cars, but I like to think my value-added is explaining why these will take so much longer than some people expect.
Amazon.com has been making a lot of noise about using small helicopter drones to deliver packages. I’m not here to say this is an impossible task or that drones won’t at some point be used for this purpose, but what I am saying is that it won’t happen this year, won’t happen next year, and in any true volume won’t happen even five or 10 years from now.
Here’s an interesting story about the economics of drone delivery. It points out that Amazon loses $2 billion per year on second-day delivery to its Amazon Prime customers. This doesn’t mean Prime isn’t profitable for Amazon, just that actual shipping charges are $2 billion more than what Prime members are imputed to be paying for that part of the service. The story and a paper linked within it also drop facts like these: the energy cost of electric drone delivery will average $0.10 per package, and drone purchase, upkeep, and maintenance will add another $0.10 per delivery, based on a 10-year drone lifespan and six deliveries per drone per day. So they say the cost per delivery will be something around $0.20 for packages up to two kilograms. Eighty percent of Amazon packages are under two kilograms and so drones are declared "economically viable".
Not.
Do you have a drone, or did you use to have one? If you no longer have it, did your drone last 10 years? Based on our Christmas drone experiences of the past couple of years in the Cringely household, I’d say our typical drone lifespan is one week. Amazon’s will probably be longer, but 10 years? And when that inevitable crash happens, how will they account for the loss of the drone and its cargo? How much should be added to the delivery cost to cover insurance?
Even more damning are the labor economics involved. Commercial delivery drones will require FAA-licensed pilots, and Amazon expects those to cost $100,000 per year each, though for some reason it doesn’t bother to use that number in its cost calculations. If Amazon drone pilots are fully employed and never take a break, then based on the economic model’s 20-minute roundtrip time they’ll be able to fly 6,000 deliveries per year, for a per-delivery labor cost of $100,000/6,000 = $16.67. Remember, Amazon’s present Prime deliveries cost around $8 each, so to save money the company is proposing a new solution that will cost at least double that amount for labor alone, and that doesn’t include the labor to load the drone.
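For the record, here is the whole calculation in one place; it is just the numbers cited above strung together.

```python
# Sketch of the per-delivery economics described above, combining the linked
# paper's hardware figures with Amazon's own pilot-cost estimate.
# All inputs are the approximate numbers cited in the column.

energy_per_package = 0.10      # electricity per delivery
equipment_per_package = 0.10   # drone purchase + upkeep, assuming a 10-year
                               # lifespan and six deliveries per drone per day
hardware_total = energy_per_package + equipment_per_package   # the ~$0.20 claim

pilot_salary = 100_000         # Amazon's estimate per FAA-licensed pilot
roundtrip_minutes = 20
work_hours_per_year = 2_000    # fully employed, never a break

deliveries_per_pilot = work_hours_per_year * 60 // roundtrip_minutes  # 6,000
labor_per_package = pilot_salary / deliveries_per_pilot               # ~$16.67

print(f"Hardware and energy per package: ${hardware_total:.2f}")
print(f"Pilot labor per package:         ${labor_per_package:.2f}")
print(f"Total, versus roughly $8 for a Prime delivery today: "
      f"${hardware_total + labor_per_package:.2f}")
```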
Labor costs aren’t going down. If the eventual plan is to make the drones autonomous, nobody at Amazon or the FAA is saying so. Then there’s the sheer complexity of such an operation at scale, which brings me to this picture of telephone wires in New York City circa 1887.
Imagine every delivery truck and bicycle messenger replaced by something that could easily smash into your head. By the way, these telephone wires disappeared in New York the very next year, destroyed by the Great Blizzard of 1888. After that blizzard the wires were put underground where drones cannot go.
So is Amazon crazy or do they know something about drone delivery that we don’t? After all, Amazon paid $775 million in cash to buy a drone company. My friend John Dreese thinks he has the answer. John is a professional aerodynamicist (you can buy his desktop software here) and a novelist (you can buy his book Red Hope here -- it’s about Mars and very good). John thinks Amazon isn’t going to buy all those drones after all… we are.
I don’t think Amazon will deliver packages via a limited number of super-drones. Instead, if I was a betting man, they will lease/rent/sell drones -to- customers, which will then fly to a centralized Amazon distribution site and pick up the packages to return to the owner’s house -- and stay at the house.
Or perhaps there will be one drone designated per street/block?
Not only can Amazon charge for the devices themselves, but they can charge a recurring fee for what is going to be almost immediate delivery.
Advantages for Amazon:
1) Selling/leasing the drones to the consumer will be a line of income.
2) Drone repair warranties will be an extra line of income.
3) Instead of a single drone learning how to approach/deliver packages to all houses in an area, each Amazon Prime HomeAir drone will only have to memorize the path between the house and the Amazon distribution site.
4) When a drone fails, it doesn’t knock out an entire zone for delivery.
John is assuming drones will fly autonomously, which we’re so far told they won’t, but in the longer run who knows? In any case I see mass drone delivery being a decade or more away. Before then we’ll have to go through drone registration, which I don’t expect to be bad at all since it looks almost exactly like the "licenses" we used to get for using CB radios. Remember those? The drones of the 1970s.
Now to autonomous cars, which I do believe will eventually be hugely successful. There are just too many advantages to a technology that replicates a horse that knows its way back to the barn. Self-driving cars have many safety advantages and can clearly be more network-efficient than human drivers, but the big gains in that regard won’t happen until most cars -- probably all cars -- are self-driving. That’s when they’ll operate at high speed going down the road only a meter or less apart. This sounds more dangerous than it is because if you are driving down the freeway and the self-driving car in front of you slams on the brakes, it won’t have time to shed enough speed for the impact to even damage your car coming up from behind. Running at the speed limit and only one meter apart, highways will be able to accommodate up to six times as many cars, and they’ll be going faster, too.
But it only works that way if all the cars are autonomous. Or you are being driven by The Stig.
At normal attrition rates, replacing 90 percent of the auto fleet will take 30 years -- 30 years to realize the full benefit of self-driving cars. I don’t think it will actually take that long, though, because the government will find ways to accelerate this trend to save on infrastructure and pollution. But the best they’ll be able to do is probably doubling the rate of adoption meaning self-driving cars will become an overnight sensation 15 years from now, not in 2016.
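For anyone who wants to check that arithmetic, here is a minimal sketch of the fleet-replacement math; the annual turnover rate is an assumption I picked to match the 30-year baseline, and doubling it lands near the 15-year figure.

```python
import math

# Attrition math behind the 30-year and 15-year figures above. The turnover
# rate is an assumption chosen to match the 30-year baseline.

def years_to_replace(target_fraction, annual_turnover):
    """Years until target_fraction of the fleet has turned over, assuming a
    constant fraction of the remaining old fleet is replaced each year."""
    return math.log(1 - target_fraction) / math.log(1 - annual_turnover)

normal_rate = 0.074   # assume ~7.4 percent of the old fleet replaced per year

print(f"Normal attrition:  {years_to_replace(0.90, normal_rate):.0f} years")      # ~30
print(f"Doubled attrition: {years_to_replace(0.90, 2 * normal_rate):.0f} years")  # ~14
```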
It isn’t easy being huge as both Apple and Microsoft are starting to realize. Both companies are incredibly successful and I’m not here to say either is in real danger, but both are suffering major structural challenges that will hurt them in 2016. What’s key for these predictions is how they respond.
I’ll deal with Microsoft first because there the challenges and solutions are both clearer than they are with Apple. I’ve been very impressed with Microsoft CEO Satya Nadella, who hasn’t so much saved the company (it didn’t need saving) as proven a real improvement over Steve Ballmer. Nadella has done the best he can to get Microsoft in order and reinvigorated, not an easy job. His major remaining challenges involve Windows Phone and Windows 10.
Windows Phone is a failure and Microsoft has had enough cracks at gaining that #1 or #2 spot it needs in the mobile market that it is probably time to give up. We’re seeing movement in this direction as Microsoft increases its support for iOS and, especially, Android. If there’s a prediction here it is that Windows Phone will die in 2016 and Microsoft will remake its mobile effort more along the lines of what IBM has been trying to do in its so-called partnership with Apple. There’s plenty of opportunity helping businesses with mobile applications so that’s where Microsoft will logically head. In the meantime there remains the headache of what to do with Ballmer’s final failed acquisition, Nokia’s phone business. Sell it? Write it off? Spin it off? Beats me.
Then there’s Windows 10, which has been a fantastic success almost entirely because it is a free upgrade. Apple can get away with making OS upgrades free because Apple makes lots of money on the hardware on which that OS runs. But Microsoft is a software company and at some point it probably expects to start charging for Windows 10. My gut says that won’t work well enough to fit Microsoft’s business plan, which means a crisis will ensue.
Microsoft is almost unique in its historic ability to leverage an operating system as a driver of market dominance. The only other company that even comes close in this regard is Red Hat. Some reversion is probably in order here and Windows may cease to be the business driver it has been in the past. I’m not saying Windows is going away, but the trend of Microsoft extracting a greater and greater percentage of the profit from every new PC sale is probably over. There just isn’t that much profit to be shared, for one thing. And for another the entire PC market has probably peaked. Microsoft’s business is shifting, possibly to the cloud. It will be interesting to see how it reacts as this unfolds in 2016.
Now to Apple, a company that appears from the outside to be entering a management crisis. Apple needs to pioneer new market categories because the international expansion that has driven the company’s growth for the past five years is almost complete. But Apple is not like other companies so here’s where we have to ask a fundamental question: does Tim Cook give a damn about Wall Street? I simply don’t know. Every other business writer would say that he does give a damn because, well, that’s what CEOs of huge public companies are supposed to do. But I’m not sure Cook does care. Nor am I even sure he should care. In fact I’m pretty sure he shouldn’t.
If Tim Cook does care about Wall Street then he should step up his game, do something dramatic, and show the world anew that he is the logical successor to Steve Jobs. But you know it wouldn’t surprise me if Cook gave up his CEO job this year, moved to chairman, and let someone else worry about Apple day-to-day.
More likely, though, we’ll see 2016 as a year when Apple responds to this growth challenge not through new products or cutting costs (Apple’s sales-per-employee are so high that cutting costs is close to useless) but through financial engineering. Cook isn’t going to be rushed into anything. And if we ignore products and markets for a moment Apple’s greatest challenge is that $200 billion in cash it holds, mainly offshore. Cook and Apple hope for a tax amnesty of sorts because they don’t want to pay an $80 billion tax bill. President Hillary is unlikely to give such a gift to Apple so some other strategy will probably be required.
Let’s think of this in a different way. Forget that the company is Apple. If any company with that much money stashed overseas went to Goldman Sachs for help what advice would they get? The rocket scientists at Goldman would come up with some complicated scam involving derivative securities and special purpose entities to serve the dual purpose of lowering Apple’s tax exposure while satisfying shareholders through share buybacks and rising dividends. IBM has shown how to play these games, propping-up its stock for years and Apple can easily do the same. Which is to say for Apple 2016 is likely to be a year of such shenanigans while waiting for new products that will actually come in 2017.
First a look at my predictions from one year ago and how they appear in the light of today:
Prediction #1 -- Everyone gets the crap scared out of them by data security problems. Go to the original column (link just above) to read the details of this and all the other 2015 predictions but the gist of it was that 2015 would be terrible for data security and the bad guys would find at least a couple new ways to make money from their hobby. I say I got this one right -- one for one.
Prediction #2 -- Google starts stealing lunch money. The title is 100 percent smart-ass but my point (again, read the details) is that reality was finally intruding on Google and they’d have to find more and better ways to make money. And they did through a variety of Internet taxes as well and cutting whole businesses, reorganizing the company and laying-off thousands of people. I got this one right, too -- two for two.
Prediction #3 -- Google buys Twitter. It didn’t happen, though it still might. I got this one wrong -- two for three.
Prediction #4 -- Amazon finally faces reality but that has no effect on the cloud price war. This one is kind of subtle, the point being that Amazon would have to cut some costs to appease Wall Street but that the cloud price war (and Amazon’s dominance in that area) would continue. I think I got this one right -- three for four.
Prediction #5 -- Immigration reform will finally make it through Congress and the White House and tech workers will be screwed. This certainly hasn’t happened yet and the situation looks marginally less horrible than it was a year ago. A true resolution will depend on who is the next President. I got this one wrong -- three for five.
Prediction #6 -- Yahoo is decimated by activist investors. The fat lady has yet to finish singing but I’ll declare victory on this one anyway -- four for six.
Prediction #7 -- Wearables go terminal. Wearables were big in 2015 but I was way too optimistic about the technical roadmap. Maybe 2016 or -- better still -- 2017. I got this one wrong -- four for seven.
Prediction #8 -- IBM’s further decline. I got this one dead-on -- five for eight.
Prediction #9 -- Where IBM leads, IBM competitors are following. Just look at HP or almost any direct competitor to IBM, especially in services -- six for nine.
Prediction #10 -- Still no HDTV from Apple, but it won’t matter. I was right -- seven for 10.
Seventy percent right is my historic average, which I appear to have maintained. Hopefully I’ll do better in 2016.
Now to my first prediction for 2016 -- the beginning of the end for engineering workstations. These high-end desktop computers used for computer-aided design, gene sequencing, desktop publishing, video editing and similar processor-intensive operations have been one of the few bright spots in a generally declining desktop computer market. HP is number one in the segment followed by Dell and Lenovo and while the segment only represents $25-30 billion in annual sales, for HP and Dell especially it represents some very reliable profits that are, alas, about to start going away, killed by the cloud.
A year ago the cloud (pick a cloud, any cloud) was all CPUs and no GPUs. And since engineering workstations have come to be highly dependent on GPUs, that meant the cloud was no threat. But that’s all changed. Amazon already claims to be able to support three million GPU workstation seats in its cloud and I suspect that next week at CES we’ll see AWS competitors like Microsoft and others announce significant cloud GPU investments for which they’ll want to find customers.
This change is going to happen because it helps the business interests of nearly all parties involved -- workstation operators, software vendors, and cloud service providers. Even the workstation makers can find a way to squint and justify it since they all sell cloud hardware, too.
Say you run a business that uses engineering workstations. These expensive assets are generally used about 40 hours out of a 168 hour week. They depreciate, require maintenance, and generally need to be replaced completely every 2-3 years just to keep up with Moore’s Law. If you do the numbers for acquisition cost, utilization rates, upkeep, software, software upgrades, and ultimate replacement, you’ll find that’s a pretty significant cost of ownership.
Now imagine applying that same cost number to effectively renting a similar workstation in the cloud. You still use the resource 40 hours per week, but when you aren’t using it someone else can, which can only push prices lower. Installing a new workstation takes at most minutes and the cost of starting service is very low. Upgrading an existing workstation takes only seconds. Dynamically increasing and decreasing workstation performance as needed becomes possible. Hardware maintenance and physical hardware inventory pretty much go away. Most data security becomes the responsibility of the software vendor, who is now selling its code as a service. Only intelligent displays will remain -- a new growth area for the very vendors who will no longer be selling us boxes.
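Here is a minimal sketch of that ownership math. Every dollar figure is a placeholder assumption, since the column gives no prices; only the 40-of-168-hour utilization pattern comes from above, and it is the real point.

```python
# Why utilization decides this. Dollar figures below are illustrative
# assumptions; the 40-of-168-hour usage pattern is from the column.

purchase_price = 8_000          # hypothetical high-end workstation
replacement_years = 3           # refresh cycle to keep up with Moore's Law
upkeep_and_software = 5_000     # hypothetical annual maintenance plus licenses

annual_cost = purchase_price / replacement_years + upkeep_and_software

def cost_per_used_hour(hours_per_week):
    return annual_cost / (hours_per_week * 52)

print(f"Owned, busy 40 of 168 hours:  ${cost_per_used_hour(40):.2f} per hour")
print(f"Same box kept busy 120 hours: ${cost_per_used_hour(120):.2f} per hour")
# A cloud provider rents those idle hours to somebody else, so it operates
# near the second number -- headroom it can use to undercut ownership.
```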
This will happen because corporate bean-counters will demand it.
From the software vendor’s perspective moving to software as a service (SAAS) has many benefits. It cuts out middlemen, reduces theft and piracy, allows true rolling upgrades, lowers support costs and raises revenue overall.
Now add-in for all parties the cross-platform capabilities of running your full workstation applications occasionally from a notebook, tablet, or even a mobile phone and you can see how compelling this can be. And remember that when that big compile or rendering job has to be done it’s a simple matter to turn up the dial to 11 and make your workstation into a supercomputer. Yes it costs more to do that but then you are paying by the minute.
Let’s extend this concept a bit further to the only other really robust PC hardware sector -- gaming. There are 15 million engineering workstations but at least 10 times that many gaming PCs and the gamers face precisely the same needs and concerns as corporate workstation owners.
My son Cole just built a 90 percent Windows gaming PC. I refer to it as a 90 percent computer because it probably represents 90 percent of the gaming power of a top-end machine. Cole’s new PC cost about $1,800 to build including display, while a true top-of-the-line gamer would cost maybe $6,000. If Cole’s PC were a piece of business equipment, what would it cost to lease it -- $20-30 per month? If he could get the same performance for, say, $40 per month from a cloud gaming PC, wouldn’t he do it? I asked him and he said "Heck yes!" The benefits are the same -- low startup costs, no maintenance, better data security, easy upgradability, and cross-platform capability.
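Using just the numbers above, the break-even is easy to sketch; the $40 cloud price is the hypothetical figure from the paragraph, not a quote from any vendor.

```python
# Break-even sketch for Cole's rig versus a hypothetical cloud gaming seat,
# using only the figures mentioned above.

build_cost = 1_800     # Cole's 90 percent gaming PC, display included
top_end_cost = 6_000   # a true top-of-the-line machine
cloud_monthly = 40     # hypothetical cloud gaming subscription

print(f"Cole's build buys {build_cost // cloud_monthly} months of cloud gaming")
print(f"A top-end build buys {top_end_cost // cloud_monthly} months")
```

Forty-five months is a long time to keep a gaming rig current, and the cloud seat gets upgraded continuously along the way.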
This makes every gaming PC vendor vulnerable.
If you think this won’t happen or won’t happen soon you are wrong. AWS can right now support 4K displays. Moving the game into the cloud is a leveler, too, as gamers vie not over their puny Internet connections but rather over the memory bus of the server that hosts them. Just as Wall Street robo-traders move closer to the market servers to gain advantage so too will gamers. And the networks will only get better.
This is going to be big.
Readers love predictions so for 15 years or so I’ve been making lots of them during the first full week of each new year. The first time I did a predictions column it was because I couldn’t think of anything else to write about that day and the reaction from readers was so strong that I’ve been stuck doing them ever since. What started as one column per year filled with about 10 predictions has expanded over time to as many as 10 separate predictions columns because as I age I am becoming ever more long-winded. Sorry. It’s reached the point this year where this introductory column won’t even contain predictions, just a guide to the several columns that will follow in the next few days.
They will begin, of course, with a look back at my predictions from a year ago to see how smart or stupid I was. Historically I’ve been about 70 percent smart and 30 percent stupid in my predictions with that number more or less dependent on how vague I can be. Sorry again.
You may have noticed I’ve been away. I was helping to launch Mineserver, my kids’ startup. That painful birth should be complete by the end of this week. Next week -- after I’ve recovered from the predictions debacle -- I’ll write a special column all about what it’s like for a guy who last shipped a technology product in 1986 to do a tech startup with co-founders whose median age is 11. To call it a challenge would be an understatement, but I think we will one day look back fondly on the experience.
My first predictions column will appear later tonight (Monday) and the whole series will be done by the end of Friday. There’s a vague possibility I’ll also do a column Friday about IBM, which is supposed to start its first 2016 layoffs this Thursday. That column will only appear if I feel there’s something unsaid or not obvious about the situation that needs to be pointed out at that time.
It’s not at all clear that IBM will announce anything on Thursday but I have it on good authority that about 10 percent of Big Blue’s worldwide headcount (about 30,000 people in all) will be let go early in 2016, whether they issue a press release to that effect or not.
Lord knows how they’ll decide who to let go. To do such a large layoff so early in the year means it probably can’t be based on year-end numbers. Maybe they’ll use a Ouija board.
I’m told that in the USA, IBM workers were asked to get their performance information (used for promotions and raises) in early for 2015 -- two months earlier than normal. That was a big tipoff that something was going to hit the fan in January. To the best of my knowledge IBM hasn’t given any USA workers their ratings for 2015 yet, so this job action is being decided independent of the facts.
The fourth quarter of calendar 2015 was a debacle for IBM and it’s clear from the outside looking-in that the company won’t be shying away from that description. Managers are being told to put employees who haven’t performed on performance improvement plans (PIPs) immediately. A manager on the conference call asked for the definition of "haven’t performed" and it’s strictly based on how much they sold (even for technical sellers, who have a support role!). So you have technical and sales people working their asses off trying to sell stuff that nobody wants to buy, and being rated a PBC 4 and put on a PIP just because the company hasn’t invested in improving their products.
The ship is going down. There are usually panicked calls trying to get something to happen by the end of the quarter, but this time there hasn’t been much of that. The sales pipeline is pretty empty of big deals that might have had a chance to change quarterly results. And registration for the big annual Interconnect (formerly Impact) conference is extremely low (about 15 percent) and it’s next month. They are resorting to giving away passes to save face.
Sometimes public companies like to pack all the bad news together to either just get it out of the way at once or to justify some draconian measure. In this case with IBM I suspect both motivations are there. If the news is big and bad enough Ginni Rometty can use it to justify some significant restructuring beyond just whittling the headcount. I suspect she wants to sell or close entirely Global Services, taking a big earnings hit in the process while leaving many customers turning slowly in the wind. Certainly something has to give: with interest rates rising, IBM may not be able to afford to keep borrowing money to buy back stock and shrink the company much longer.
Everyone I know who is still working at IBM is hoping for a separation package, their fear being that the era of packages may be ending or has already ended at IBM. Morale is at rock bottom.
There is a very clear and predictable pattern here. The quarter ends. IBM calculates its numbers. It then recalculates its forward-looking numbers. It sizes and implements staff reductions accordingly. It announces quarterly earnings, updates its forward looking numbers, then announces cost reduction plans and write offs.
Since the whole idea is to save money and the most expensive IBM workers are in the USA, they will be hit extra hard.
It seems oddly fitting that this week -- a week scarred by the bizarre and violent mass murder in San Bernardino -- I received a LinkedIn invitation to connect with someone who listed this as their job description:
Install, maintain, and repair GPS, Wi-Fi, and security camera systems on tour buses. In 2010, working with grant money from Homeland Security, I installed security systems on a fleet of tour buses and I have been maintaining those systems since then. In 2011, I helped install multi-language listening systems on tour buses and have been the lead maintenance technician. Currently, I am project manager for upgrading a fleet of 50 tour buses with new GPS systems using Homeland Security grant monies. This requires coordinating with engineers of service providers to solve unusual, complex problems.
None of this should surprise me, yet still it does. I didn’t know Homeland Security was listening-in like that, did you?
And since they evidently are listening, shouldn’t they have told this guy to be quiet about it?
There is no presumption of privacy riding in a tour bus, so it probably isn’t illegal to listen-in. Bus security cameras and their footage have been around for years now and appear regularly on TV news after bus crimes. But there’s something about this idea of not only our actions being recorded but also our words that I find disturbing. It’s especially so when we consider the burgeoning Internet of Things (IoT). What other devices will soon be snitching on us?
Innocent people have nothing to fear, we’ll be told, but with smartphones and smartwatches and smarthomes and smart refrigerators and cars with their own continuous LTE links, how far can we be from every one of us carrying a listening device? It happened already in a Batman movie, where Bruce Wayne hacked all the phones in Gotham City, leading to the protest resignation of Morgan Freeman as Batman’s tech guy. If something like this were to happen with the real telephone system, who would be our Morgan Freeman? I’m guessing nobody.
Then there’s Big Data fever. Everyone wants bigger data sets to justify more computers to analyze them. Every data center director since the first one was built has wanted more processors, memory, and storage. If we’re moving to listening in every language the processing power required will be huge -- another boon for the cloud.
One thing that struck me watching on TV the aftermath of the San Bernardino attack was how our first responders have changed. Not only do the police now have tanks, but a third of the big black SUVs parked outside the Inland Regional Center were identified by the helicopters flying above as belonging to the Department of Homeland Security, those very grant-givers from LinkedIn. It’s a whole new layer of bureaucracy that will probably never go away.
I talked recently to the developer of a phone app that could, by using your smartphone mic as you drive, tell you when a tire is about to fail. In one sense it’s a brilliant application, but if your phone can hear clearly enough the difference between a good tire and a bad one then it can hear a lot of other things, too. I can only imagine what the back seat of my first car -- a 1966 Oldsmobile -- would have heard and reported on if we’d had this capability back in the day.
But the march of technological progress is inevitable so most of this will come to pass. And the fact that we have literal armies of hackers all over the world devoted to cracking networks suggests that once such a listening capability exists for any purpose someone will find a way to exploit it for every purpose.
Then life will become like an episode of Spy versus Spy from the Mad Magazine of our childhood with consumers buying technology to protect themselves. Tell me your kid won’t want that bug-sweeping app or the one that spoofs the insurance company GPS into thinking the car isn’t speeding after all.
And where will that leave us other than paranoid and in constant need of AA batteries? It won’t improve our quality of life, that’s for sure, and I’m willing to bet it won’t save many lives, either.
Soylent Green is the punchline of a bad joke told to me at the breakfast table by Channing, my 13-year-old son, but in a way it is fitting for this column about women executives in danger of being chewed-up by their corporate machines. And kudos to you if you caught the reference to Edward G. Robinson’s final film -- about an over-populated world where people are recycled into cookies.
First up is Yahoo CEO Marissa Mayer, who I’m told is rapidly losing the support of her hand-picked board. Mayer, who is expecting twins, will probably not be returning from her upcoming maternity leave, and Wall Street has begun speculating about possible successors.
Hers would have been a tough gig for anyone. Yahoo is asset-rich while at the same time unable to execute on its re-spackled but essentially unchanged business plan of being a web portal (remember those?). Yahoo’s Alibaba and Yahoo Japan stakes make it a target for activist investors and Mayer’s desire to show she can run an up business in a down industry was probably a fantasy right from the start. Still, she was doing what she thought would work.
Well it didn’t work. Getting the activists off her back by spinning off Yahoo’s Alibaba stake not only comes with a big tax bill; the board has also finally started to realize it was spinning off the wrong part of the business. What really has to go is the depreciating asset that has come to be known as Yahoo’s core -- all the parts of the company except Yahoo’s Alibaba and Yahoo Japan holdings.
But don’t expect Yahoo to just flip the plan, keeping Alibaba and spinning off to investors those parts of Yahoo with a yodel attached. I suspect that ship has sailed. What I think more likely is an outright sale of the legacy business to one or more Private Equity funds. They’ll take private what we all think of as Yahoo, strip it mercilessly of under-performing businesses and luxury assets (bye-bye Katie Couric), load the shriveled carcass up with debt, and then sell it piecemeal to the only two real players left in the portal business -- Google and Microsoft.
What remains of Yahoo after this process is complete won’t be an Internet company but rather a very wealthy financial entity worth $50 billion or so. In short, a completely different company and one Mayer probably wouldn’t even want to run. And that suggests a completely different type of CEO for the successor company which makes me suspect that the candidates being considered in Wall Street speculation are either short-timers there just to engineer the Private Equity sale or they are the wrong folks for the job.
My candidate to run post-Yahoo is Kleiner Perkins honcho John Doerr. This is just my opinion, mind you, and not based on any insider information or discussions with Doerr, who is probably as surprised as you are to read this. But I think Doerr could use a change of venue following the Ellen Pao mess and this would give him an amazing third act in his Silicon Valley career. Just imagine the impact on Silicon Valley and the technology world of putting a very smart billionaire in his own right who has nothing left to prove in charge of investing $5 billion per year in young companies.
The next female CEO in jeopardy is IBM’s Ginni Rometty. Given all the negative things I’ve written about IBM since 2007 this should be no surprise, but the story is more complex than simple mismanagement. I’ve decided Ginni really believes in what she is doing. It’s still the wrong plan and I think bound to fail, but I now believe Ginni has decided to take even bolder moves in the wrong direction -- moves that should very quickly prove that one of us is right and the other wrong.
I’ve been told another big Resource Action (IBMspeak for layoff) will be starting in January, ultimately costing the company up to 30,000 heads. This will be accompanied by a change in IBM’s 401K plan, either reducing or eliminating completely the company’s matching funds. The cuts will be coming mainly in IBM’s Global Technology Services (GTS) division which is responsible for 60 percent of the company’s revenue. These announcements will be accompanied, no-doubt, by additional cloud investments, the idea being to sell investors on the idea that IBM will make up in its cloud business what it inevitably loses in GTS.
This will represent a significant strategy shift for IBM. For one thing, it’s Big Blue essentially abandoning its largest current business. GTS customers are already unhappy with IBM’s quality of service so what will be the likely effect on those customers of taking 30,000 more bodies from their already lousy service provider?
There are three ways I can see IBM playing these changes in GTS. The BIG PICTURE will continue to be IBM committing to cloud and making necessary changes to go with that, but on the more tactical level IBM can: 1) simply say it is getting out of services over time, customers and employees be damned; 2) recommit to GTS but as a significantly smaller operation offering white glove service to only IBM’s highest-margin customers (in other words abandoning many accounts), or; 3) make the headcount reduction not through a Resource Action but by selling outright some or all of GTS, generating more cash to put into cloud. Expect any GTS buyer to be Asian.
No matter which course she chooses Ginni will piss-off a workforce that already distrusts her, but she’s so far down this path-of-no-retreat that it probably doesn’t concern her. 2016 is the year that will make or break Ginni Rometty as IBM’s CEO and she appears to be okay with that. I think it is going to be a disaster but you have to kind of admire her all-in attitude.
Remember IBM’s CAMSS (cloud, analytics, mobile, social, security) strategy? I think we can expect some strategic streamlining, rolling at least two or three of these letters together. IBM is all-in on cloud so that won’t change and their security business I think has good prospects, but analytics, mobile and social are all at risk because it’s difficult to show financial progress in those areas. IBM hasn’t been especially successful generating new analytics business, their mobile strategy is too dependent on Apple (a total loon in IBM’s view) and IBM still can’t even describe its social business, much less break out financials. Ginni will find a way to spin this change, too, hoping to buy time.
But the bottom line for IBM seems to be a future in cloud and security (and mainframes, as a couple commenters point out below -- that’s not going to change, but neither is it going to get any bigger). To do that with characteristic Armonk gusto, Ginni will use the 401K savings as well as division sales, if any, to bankroll a major cloud expansion, hoping to leapfrog Google and Microsoft and move into the #2 position behind Amazon in that sector. Alas that’s #2 in a business that’s nowhere near as big as what IBM is abandoning, so the company as a whole will continue to shrink. Share buybacks will continue, the dividend will continue to go up, but there will come a point when IBM is a $40 billion cloud and security company. Maybe that will be a good thing, but what it won’t be is a consequential thing. It’s the end of what used to be IBM.
And finally we have one more female executive on the hot seat -- Diane Greene, who was recently hired to run Google’s enterprise cloud business. My friend Om Malik has a good analysis of this move here and I agree with him that it won’t work, though I have a few extra reasons why that Om doesn’t mention.
When a baseball team is performing poorly one of the cheapest moves the owners can make is firing the manager. It shakes up the squad, hopefully scaring the high-salary players into trying harder. And it almost always works, especially if you hire a new manager who is just as good or better than the old one while -- this is key -- being obviously different in some major way. Well Diane Greene, who comes from VMware, is clearly different from the usual Google executive hire. It’s not that she’s a woman but that she’s coming from a high position in a company with a very different corporate culture than Google’s. "This time we really mean it" is the message this hire is supposed to convey.
But it still won’t work. Google’s enterprise cloud business performance has been miserable, Om points out that VMware pretty much missed the cloud boat, and Amazon Web Services has a huge lead while Microsoft has an installed base of loyal developers that Google simply does not. It will take a miracle for Greene to be really successful in her new job and miracles have been few and far between lately at Google.
Here I’ll throw a couple more logs on the fire. Remember IBM appears to be about to bet the company on enterprise cloud computing. Google will have a hard enough time making headway against Amazon and Microsoft, but throw-in a kamikaze Big Blue and it looks even worse. But the biggest problem Diane Greene faces at Google is Google, itself. The company’s heart is just not in enterprise cloud. It never has been.
If you’ve ever pitched a startup idea to a really successful tier one venture capitalist -- some billionaire who sits on the board of at least one iconic company they see themselves as having helped found and made successful -- you’ll notice that they tend to hear your pitch through a filter that emphasizes how the startup will impact their already-successful #1 portfolio star. Your idea can be new and fresh and exciting and truly world-beating, but if you are pitching it to someone who got really, really rich from Yahoo or Google or Facebook, they’ll concentrate first on what this means to Yahoo or Google or Facebook. That’s because their incremental upside in that unicorn is bigger to the VC than the risk-adjusted upside of your little startup is ever likely to be. And that’s exactly the problem Diane Greene has trying to get Google to see its enterprise cloud business as something important to the company.
What’s important to Google is search, ad sales, and YouTube. Cloud, to Google, is an internal service that enables these other revenue generators. Slugging it out with Amazon may sound exciting, but what impact will it have on search, ad sales, or YouTube other than to distract internal developers and put those services at some risk? That’s why Diane Greene will have a very difficult time succeeding at Google, because her very success will be seen internally as a threat to the company she is trying to help.
It’s not a job I would want.
Earlier this year two different research reports came out describing the overall cloud computing market and Amazon’s role in it. Synergy Research Group saw Amazon as by far the biggest player (bigger in fact than the next four companies combined) with about 30 percent market share. But Gartner, taking perhaps a more focused view of just the public cloud, claimed Amazon holds 82 percent of the market with cloud capacity that’s 10 times greater than all the other public cloud providers combined. I wonder how such disparate views of the same company can both be right. And I wonder, further, whether this means Amazon actually has a cloud monopoly.
Yup, it’s a monopoly.
Amazon has monopoly power over the public cloud because it clearly sets the price (ever downward) and has the capacity to enforce that price. Amazon is the OPEC of cloud computing and both studies actually show that because both show Amazon gaining share in a market that is simply exploding.
The way you gain share in an exploding market is by exploding more than all the other guys, and we can see that at work by comparing IBM’s statement that it would (notice it is speaking about future events) invest $1 billion in cloud infrastructure in the current fiscal year, versus Amazon’s statement that it had (notice it is speaking of events that had already happened) spent $5 billion on cloud infrastructure in the past fiscal year.
Maybe $1 billion against definitely $5 billion isn’t even a contest. At this rate Amazon’s cloud will continue to grow faster than IBM’s cloud.
Wait, there’s more! Only Amazon can really claim it has a graphical cloud. While not all Amazon servers are equipped with GPUs, enough of them are to support millions of simultaneous seats running graphical apps. No other cloud vendor can claim that.
Having a graphical cloud is important because it is one of those computing milestones we see come along every decade or so to determine who are the real leaders. Think about it. There were mainframes with punched cards (batch systems) then with terminals (interactive systems), then interactive minicomputers, then personal workstations and computers, then graphical computers, mobile computers, networked computers, Internet computers and now cloud computers. Each step established a new hierarchy of vendors and service providers. And it is clear to me that right here, right now Amazon is absolutely dominant in both cloud and graphical cloud computing. It set the price, it set the terms, it has the capacity, and everyone else just plays along or goes out of business.
And that sounds like a monopoly, which is illegal, right?
Not really. Apple at one time had 70+ percent PC market share and nobody talked then about Cupertino’s personal computing monopoly. That’s because first movers always have huge shares of what are, at the time, really tiny markets. And right now cloud computing is tiny enough in absolute dollars and as a percentage of any vendor’s total sales that no company is in a position to threaten the existence of another strictly through cloud pricing policies.
If Amazon’s cloud success led to IBM getting out of the cloud business it wouldn’t be IBM going out of business, just Armonk turning its capital toward some other, more lucrative, purpose.
But there is an important question here: at what point will Amazon be in a position to use lethal cloud force? It’s a market doubling or more in size every year. How many more doubles will it take for Amazon to gain such lethal business power? I’d say five more years will do it.
And when I say do it, think about the company we are talking about. Amazon is unique. No large company in the industry right now has a more effective CEO than Jeff Bezos. No large company has a bigger appetite for calculated risk than does Amazon. No company is more disciplined. And -- most importantly -- no large company has the ear of Wall Street the way Bezos and Amazon do. They can try and fail in any number of areas (mobile phones, anyone?) and not be punished for it in the market. And in this case that’s because the market is smart, relying on Bezos’ innate ruthlessness.
If Amazon reaches a true monopoly at scale it’ll do all it can to make no room at all for competitors.
And that’s why we should all probably root this time for the other guys, just to keep Amazon somewhat in check. In the coming months those possible competitors will identify themselves through significant graphical cloud investments. Microsoft will certainly be there. IBM might be there. Apple could always decide at any time to play in this new sandbox but I’m not sure it really understands the game and Amazon might already be untouchable.
What do you think?
A reader pointed out to me today that Yahoo, minus its Alibaba and Yahoo Japan stakes plus cash, is now worth less than nothing according to Wall Street. This says a lot about Yahoo but even more about Wall Street, since the core company is still profitable if in decline. If I were a trader (I’m not) that would argue Yahoo is a buy since there’s likely to be a future point at which the company will be free of those other riches and even Wall Street will be forced to give the carcass a positive value.
But when I heard about the negative value story the first thing that came to mind was something my old friend Joe Adler said long ago about one of my startups. "Your company is starting to have a stench of death about it", Joe said. And Joe was right.
Yahoo may be profitable, may have billions in investments, but still the company has about it a stench of death. This less-than-tangible corporate characteristic is dragging the company under and there is only one way to deal with it: change everything.
Three years into her tenure as Yahoo CEO, Marissa Mayer has tried a lot of things to improve her company’s prospects but she hasn’t changed everything. Her current path seems to be spinning-off the Alibaba stake to get activist investors off her back, then spinning-off Yahoo Japan when those activist investors realize they can suck the same blood a second time, then finally Ms. Mayer will be in command, 2-3 years from now, of a slimmer, trimmer, and I-guarantee-you totally worthless Yahoo. That’s because you can’t use an old turnaround playbook for what’s essentially a new market.
Yahoo can’t go back to its former greatness because that greatness is no longer available to be got.
Think about that statement because it’s as true for Google and Microsoft as it is for Yahoo. PC search has peaked, PC operating systems and applications have peaked. It’s just that what Yahoo does -- functioning as a web portal -- peaked around 2001.
It’s popular to say that one definition (some people say the definition) of insanity is doing the same thing over and over expecting a different result. I’ve put some legwork into tracking-down that quote and found it isn’t true. But when it comes to Yahoo or any Yahoo-like company trying permutations on half a dozen restructuring techniques, I’d say there must be some insanity involved because they are changing all the wrong things.
Its corporate objective is unachievable not because Yahoo isn’t smart enough but because its intended destination no longer even exists.
So what’s to be done? Well it’s pretty darned obvious to me that Yahoo’s current path is the wrong one. It is, in fact, absolutely the wrong way to go. Yahoo would be far smarter to do the exact opposite of everything it has recently proposed.
Ask Warren Buffett how to get rich and he’ll eventually talk about preservation of capital. If you have a ton of money or assets sitting on your corporate books, about the worst thing to do is to give them away, yet that’s exactly what Yahoo will be doing by spinning off its Alibaba stake to shareholders.
The whole point of the Alibaba spin-off is to placate investors, buying Mayer extra time to fail.
If the present goal is unreachable, giving away two thirds of the company to buy more time to reach it makes no sense.
Now let’s pause for a moment and consider two famous tech turnarounds -- Apple and IBM. Each of these companies had an amazing turnaround beginning in the 1990s that will be held up as examples of what Yahoo could do, too. Except Yahoo isn’t IBM circa 1993 or Apple circa 1997. Not even close.
Both of those companies were actually in worse shape than Yahoo is today, but their markets weren’t. Apple and IBM had to recover from horrible mismanagement while Yahoo has never lost a cent. Yahoo’s problem is that the world wants the solutions it offers less and less. And Yahoo’s answer to this, which has been to nudge a little in one direction or another, simply isn’t bold enough.
Preserve the capital, not the business.
Yahoo shouldn’t be spinning-off anything. Mayer should be selling every operating business she can while it still has trade value. Search, advertising, content, the various portals like Yahoo Finance -- these all should be for sale, primarily to Google and Microsoft.
Let them fight for economies of scale in an industry that has already peaked.
When Marissa Mayer came to Yahoo three years ago she had a mandate for change but she didn’t change enough. Conventional wisdom says that mandate has expired and can’t be renewed. In this case conventional wisdom is wrong. Ms. Mayer has a chance to renew her mandate if she proposes changes that are bold enough to shock and awe Yahoo shareholders.
Sell everything that moves, turning it into dry powder for the real battle that lies ahead. Then explain to shareholders the precise strategy through which the New Yahoo will turn $50 billion into $500 billion over the next decade. That will be enough to renew Mayer’s mandate.
But only if she’s smart enough to do it.
And what’s this amazing strategy? I explained it all right here 13 months ago. Here’s that entire column from September 2014 with not a single word updated or changed. It was true then and it’s still true today, though time is quickly running out.
Alibaba’s IPO has come and gone and with it Yahoo has lost the role of Alibaba proxy and its shares have begun to slide. Yahoo’s Wall Street honeymoon, if there ever was one, is over, leaving the company trying almost anything it can to avoid sliding into oblivion. Having covered Yahoo continuously since its founding 20 years ago, I think it’s clear Y! has little chance of managing its way out of this latest of many crises despite all the associated cash. But -- if it will -- Yahoo could invest its way to even greater success.
Yahoo CEO Marissa Mayer, thinking like Type A CEOs nearly always seem to think, wants to take some of the billions reaped from the Alibaba IPO and dramatically remake her company to compete again with Google, Microsoft, Facebook, and even Apple.
It won’t work.
Those ships have, for the most part, already sailed and can never be caught. Yahoo would have to do what it has been trying to do ever since Tim Koogle left as CEO in 2003 and regain its mojo. There is no reason to believe that more money is the answer.
It’s not that Mayer isn’t super-smart, it’s that the job she is attempting to do may be impossible. She has the temperament for it but the rest of Yahoo does not. Even if she fires everyone, Yahoo still has a funny smell.
In practical terms there are only two logical courses of action for Mayer and Yahoo. One is to wind things down and return Yahoo’s value to shareholders in the most efficient fashion, selling divisions, buying back shares, and issuing dividends until finally turning out the lights and going home. That’s an end-game. The only other possible course for Yahoo, in my view, is to turn the company into a Silicon Valley version of Berkshire Hathaway. That’s what I strongly propose.
Mayer seems to be trying to buy her way ahead of the next technology wave, but after a couple of years at this game it isn’t going well. Lots of acqui-hires (buying tech companies for their people) and big acquisitions like Tumblr have not significantly changed the company’s downward trajectory. That’s because that trajectory is determined more by Google and Facebook and by changes in the ad market than by anything Yahoo can do. It’s simply beyond Mayer’s power because no matter how much money she has, Google and Facebook will always have more.
It’s time to try something new.
While Berkshire Hathaway owns some companies outright like Burlington Northern-Santa Fe railroad and GEICO, even those are for the most part left in the hands of managers who came with the businesses. At Coke and IBM, too, Berkshire tends to trust current management while keeping a close eye on the numbers. Yahoo should do the same but limit itself to the tech market or maybe just to Silicon Valley, keeping all investments within 50 miles of Yahoo Intergalactic HQ in Sunnyvale.
Yahoo’s current stakes in Alibaba and Yahoo Japan are worth $36 billion and $8 billion respectively, and Alibaba at least appears to be on an upward trajectory. With $9 billion in cash from the Alibaba IPO, Yahoo has at least $50 billion to put to work without borrowing anything. $50 billion is bigger than the biggest venture capital, private equity, or hedge fund.
Mayer is smart, but maybe not smart enough to realize the companies in which she is interested could do better under their own names with a substantial Yahoo minority investment. That would leverage Yahoo’s money and allow a broader array of bets as a hedge, too. Mayer can pick the companies herself or -- even better -- just participate in every Silicon Valley B Round from now on, doing a form of dollar cost averaging that puts $15 billion to work every year. With future exits coming from acquisitions and IPOs (and possibly winding-down its own tech activities) Yahoo ought to be able to fund this level of investment indefinitely. Yahoo would literally own the future of tech.
Silicon Valley companies that make it to a B Round (the third round of funding after seed and A) have dramatically better chances of making successful exits. Yahoo wouldn’t have to pick the companies -- hell, it wouldn’t even have to know the names of those companies, just their industry sectors and locations. Forty years of VC history show that with such a strategy investment success would be practically guaranteed.
As opposed to the company’s current course, which is anything but.
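To make the diversification argument concrete, here is a minimal back-of-envelope simulation of that spray-and-pray B Round strategy. Every number in it -- deal counts, hit rates, return multiples -- is a made-up assumption for illustration, not anything Yahoo or any VC has published. The point is only that when you write thousands of small checks the portfolio outcome converges on the expected return, which is the "practically guaranteed" part of the logic.

```python
# Back-of-envelope Monte Carlo sketch of the "fund every B Round" idea.
# Every number below is a hypothetical assumption for illustration only.
import random

YEARS = 10
ANNUAL_BUDGET = 15e9          # dollars put to work each year (from the column)
DEALS_PER_YEAR = 300          # hypothetical number of Valley B Rounds funded
P_BIG_EXIT = 0.10             # hypothetical odds a B Round company exits big
BIG_EXIT_MULTIPLE = 15        # hypothetical return multiple on a big exit
SMALL_OUTCOME_MULTIPLE = 0.3  # hypothetical recovery on everything else

def simulate(seed: int) -> float:
    """Return the total value of one simulated decade of blind B Round investing."""
    rng = random.Random(seed)
    check_size = ANNUAL_BUDGET / DEALS_PER_YEAR
    total = 0.0
    for _ in range(YEARS * DEALS_PER_YEAR):
        if rng.random() < P_BIG_EXIT:
            total += check_size * BIG_EXIT_MULTIPLE
        else:
            total += check_size * SMALL_OUTCOME_MULTIPLE
    return total

runs = [simulate(seed) for seed in range(200)]
average = sum(runs) / len(runs)
print(f"Invested: ${YEARS * ANNUAL_BUDGET / 1e9:.0f}B, "
      f"average simulated portfolio value: ${average / 1e9:.0f}B")
```

With these particular guesses the portfolio returns a bit under twice the money invested, run after run, with very little spread between runs. Whether the result gets anywhere near $500 billion depends entirely on the multiples you assume; what the law of large numbers buys you is consistency, not magic.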
In my last column I wrote that Dell buying EMC is a great idea (for Dell) and left it to this column to more fully explain why. It takes two columns because there is so much going on here in terms of both business models and technologies. As the title suggests, it comes down to Michael Dell against the world, and in this case I predict Dell will win; Cisco, HP, and IBM will lose; Apple will be relatively unaffected; and I don’t really know what it will mean for Microsoft, but I think the advantage still lies with Dell.
One key point: every one of these companies except Dell is publicly traded and answerable to Wall Street, while Dell is for now answerable only to the gods of Texas bidness, who must at this point be giddy with greed. So all of these companies except Dell have essentially the same playbook -- cutting costs, laying off workers, and outsourcing like crazy, all to pay for the dividends and stock buybacks Wall Street these days defines as prudent corporate behavior. In contrast to this defensive game, Dell can use its free cash flow to transform the company and dominate the market -- what 20 years ago we would have thought of as the right way to build a company. How quaint.
In order to become dominant Dell has to build or buy companies in these areas -- network, storage, compute, virtualization, security, and integration consulting. Notice that last item because it means Dell will also challenge integrators like CSC, SAIC, HP-EDS, IBM, etc. Why buy integration from anyone other than the equipment provider? If Dell has compute, storage, network, and virtualization why look at HP or IBM for any of those parts?
Buying EMC is just the beginning of this battle for Dell.
Many readers disagree with me about this as you can see in the comments from my last column. But here it is important to differentiate between business today and business in the future. My critics are too often living in the future, which is to say seeing Sand Hill Road startup trends as what matters when all they really are at this point are trends, not sales.
"VMware is an afterthought in most cloud discussions", said one reader. "They owned the first wave but have been almost non-existent in the second container wave. This is classic innovator’s dilemma stuff. Their cash cow is large boring old companies running their own data centers. They’ve had zero incentives to innovate towards the faster-booting/application-centric/devops-friendly model etc. that the Herokus/AWSs/Googles/Dockers etc. of the world have been running towards the past 4-5 years. VMWare has been waking up the past year or two but it’s almost certainly too late. I have a friend, one of the early engineers at VMWare, that quit in disgust 5-6 years ago as he saw the writing on the wall and management wouldn’t budge. Cost isn’t the problem. Their tech just doesn’t matter much anymore".
I’m not saying this reader is completely wrong, but I’m not sure it really matters.
Let me explain. VMware has about 85 percent of the corporate hypervisor market. Microsoft’s Hyper-V has been making some inroads. There’s a little Citrix (hypervisor) out there. And Citrix Xen is the big player in the cloud world right now. Yes, there’s a lot of interest in Docker, but from a corporate point of view it is not quite ready for prime time. Stop talking to VCs and startup engineers and start talking to IT managers and you’ll discover they all use a lot of third-party software that doesn’t support Docker. And the big vendors aren’t helping Docker, either. Oracle, for example, has bought a lot of products (e.g. WebLogic) and is putting very little R&D into them, so Oracle’s adoption of Docker has been painfully slow. So even if my reader is correct, before businesses can move to Docker they’ll need Docker features comparable to VMware’s plus support from third-party software firms like Oracle -- support that so far doesn’t exist.
The frustration of that VMWare engineer who quit is common in the industry. I’m not even going to blame Wall Street for this one. Engineers -- especially early software engineers in startups -- love to get disgusted and quit. It’s what they do to make way for the next wave of engineers.
And in contrast to his attitude of disgust, I have to give EMC credit: when it bought VMware it let the company keep developing and investing in the product. Compare VMware today with five and 10 years ago and it has shown pretty impressive progress. Some of its features are still unique in the market. EMC didn’t gut VMware the way IBM or CA would have done.
One of the big challenges in cloud is keeping corporate data and processing separated and being able to replicate the complex network and storage underneath applications. A typical big company could have hundreds of VLANs on its network. If you are a cloud provider with hundreds of big-company customers, that’s tens of thousands of VLANs you absolutely, positively must keep isolated. While Docker is really exciting, this scenario also requires software-defined networking and software-defined storage. VMware has very good technology in this area. It understands things the others are just beginning to figure out. This is why Dell is buying EMC.
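As a rough illustration of why plain VLANs don’t scale to that kind of multi-tenancy -- and why software-defined overlays matter -- here is a quick bit of arithmetic. The customer and VLAN counts are hypothetical; the tag widths come from the 802.1Q and VXLAN specs (VXLAN-style overlays being the general approach taken by SDN products such as VMware NSX, as I understand it).

```python
# Quick arithmetic on network-segment scale for a multi-tenant cloud.
# Tenant counts are assumptions for illustration; tag widths come from the standards.

VLAN_ID_BITS = 12     # 802.1Q VLAN tag: 4094 usable IDs (0 and 4095 are reserved)
VXLAN_VNI_BITS = 24   # VXLAN network identifier: roughly 16 million segments

usable_vlans = 2 ** VLAN_ID_BITS - 2
usable_vnis = 2 ** VXLAN_VNI_BITS

customers = 200            # hypothetical big-company customers at one provider
vlans_per_customer = 300   # hypothetical VLANs each customer brings along

needed = customers * vlans_per_customer
print(f"Isolated segments needed: {needed:,}")
print(f"Plain 802.1Q can label:   {usable_vlans:,} -> {'enough' if needed <= usable_vlans else 'not enough'}")
print(f"VXLAN-style overlay can:  {usable_vnis:,} -> {'enough' if needed <= usable_vnis else 'not enough'}")
```

With those assumed numbers you need about 60,000 isolated segments, more than ten times what 802.1Q’s 4094 IDs can label, which is exactly the gap overlay networking was invented to close.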
But buying EMC is just the first step for Dell if it is going to change corporate IT through full infrastructure integration.
Apple is now a consumer IT shop so it probably won’t be affected by any of this. Dell is going to be exclusively a Small and Medium Business (SMB) and Enterprise IT shop. This brings us back to compute, storage, network, cloud, security, and operations/integration, which will be Dell’s response to IBM’s CAMSS (cloud, analytics, mobile, social, and security), with the major difference between the two being that Dell’s focus is real and IBM’s is not.
So here’s what Dell is going to do (or has already done) in each of these areas:
Here’s what I think will eventually happen to a variety of big vendors as a result of this new Dell strategy: first HP and IBM will go down in flames.
Cisco, Juniper, Brocade, and Citrix will all be affected. Cisco should respond by buying NetApp. Remember you read it here first.
But VMware has already won the Enterprise so even if Cisco buys NetApp they will still run VMware and still need RSA.
Microsoft plans in its new server OS releases to break server software into two parts -- Hyper-V (the virtualization engine) and the OS/application (like Active Directory). So if you want all the features of Active Directory you will need to drop VMware and go with Microsoft Hyper-V to get the full functionality. Or you could stay with VMware, but then your Active Directory software will not have all the bells and whistles.
This strategy was originally developed in the early 1990s when Microsoft came up with Office to go after Harvard Graphics, WordPerfect, Lotus 1-2-3, and dBase. The difference this time is that this breakup of the operating system is not going to fly. The battle will be fought in courts around the world and Microsoft will lose. Office was better; Hyper-V is not. Every enterprise in the world would have to throw away billions of dollars’ worth of VMware development and experience to switch to Hyper-V, so they will all go to court to stop Microsoft.
One reader pointed out that Microsoft helped Dell go private with a $2 billion loan. In the context of a $100+ billion enterprise I don’t think this is enough to immunize Redmond from the future.
Infrastructure Providers: CSC, Booz Allen, HP, IBM, Harris, CACI, Deloitte, et al. -- these guys are all sliding down the hill anyway. As IaaS grows, the Dell deal just speeds up the process.
WAN providers (AT&T, Verizon, CenturyLink, Comcast, Time-Warner): This is as big for them as it is for the IT hardware companies. Virtualization needs two things: speed and Layer-2. Currently enterprise clients use MPLS-VPN from the telephone companies. Dell is going to tell those clients to move to Carrier Ethernet.
Cisco has pointed out to the telephone companies that the operational costs of Carrier Ethernet, from 2.5MbE to 10GbE, are 75 percent less than MPLS. With Comcast and the cable industry moving into the enterprise space we will see the Carrier Ethernet explosion take place: 10GbE to the Dell VMware cloud equal to the 10GbE between your data centers, the edge at 100MbE to 1GbE, all virtualized. The telephone companies are going to be stuck in the MPLS-VPN marginal cost structure and stay with more expensive, slower Layer-3 topologies.
Oh, and there’s one more thing -- mobile. Dell has had a sorry history in handhelds and phones, but with the EMC deal (combined with all these other details) there’s one more element that nobody has yet mentioned -- AirWatch, which comes along with EMC. In case you don’t know AirWatch, it is one of the top two players in mobile device management and app wrapping. Every financial firm in the world uses mobility management software on its phones to control the app environment as well as meet regulatory requirements. Dell now has a seat at that table, too.
I’m not saying it’s game over but corporate IT is about to radically change and Dell will be the big winner.
The Wall $treet Journal carried a story last week about Dell Computer possibly buying EMC, the big storage vendor, and this morning Dell confirmed it, pinning a price of $67 billion on the deal. There’s a lot to wonder about in this combination, which I think is pretty brilliant on Dell’s part even if I’m not generally in favor of mega-mergers. But it seems to me most of the experts commenting on the deal have it ass-backwards as Wall Street once again proves it doesn’t really understand technology business.
EMC has this large but aging storage division and a valuable subsidiary in VMware, of which EMC owns 80 percent. Activist investors have been rumbling that EMC should spin off VMware to EMC shareholders because that’s the best way to realize the value of the asset and share it tax-free. Michael Dell appears as something of a white knight, except he, too, is expected to get rid of VMware to finance the deal. The only thing wrong with this picture is that all the people who want to spin off or sell VMware don’t seem to realize that’s where the value of this EMC deal lies for Dell.
It’s as simple as this: if Dell owns VMware, everyone else who makes servers for VMware will be contributing to Dell’s profits. If Dell chooses to get into the cloud, owning VMware could make it a powerhouse in the industry with a price advantage no one else can match. Most of the corporate world is on VMware and that market needs a VMware-based cloud. One doesn’t exist today because of VMware’s software cost. Dell could fix that… but only for itself.
And EMC is no slouch. It is a big maker of storage and is making inroads against IBM. It has a disk-based backup product (Data Domain) and IBM doesn’t. It has a great NAS product (Isilon) while IBM ended its partnership with NetApp. EMC will put Dell in many new corporations, which could help its PC and server businesses, too.
The big loser in this is HP. IBM will be hurt, too, especially in the cloud and storage businesses.
The key difference between Dell and those other two companies is that Dell is privately held and therefore immune to activist investors or -- for that matter -- any investors at all who aren’t Silver Lake Partners or Michael Dell himself.
IBM’s storage products have been really constrained by the company’s now derailed quest for $20 EPS. IBM has neglected this business for years and it is hurting sales. EMC has under-invested in its business, too, because of Wall Street pressures. But Dell being privately owned changes that. This deal could be very good for EMC’s products and very bad for IBM’s.
Ironically, both IBM and HP could have bought EMC but walked away. Big mistakes.
Whoever owns VMWare next could control and own the future of the cloud.
I’m writing as though the deal isn’t sealed because something could still happen to take EMC away from Dell. Several companies would be wise to try grabbing EMC away from Dell -- that would be the smart move -- but I don’t think it will happen.
What if a Chinese or Indian firm stepped in and bought EMC? The USA has already pretty much screwed up its IT industry. If ownership of EMC left the USA, that could be the tipping point where the USA loses its position in the technology part of IT. It would be like the electronics industry in the 1960s, when everything went to Asia.
It is interesting to hear all the financial press chatter on this story. They’re still following the default thinking -- buy EMC, then spin off and sell VMware to make the acquisition work better financially. They’re even discussing how to do it with the least tax implications. What they are not thinking about is the possibility of a bidding war for VMware. It is equally likely someone would buy the whole company and spin off EMC.
Next column: Now what will Dell do with EMC?
A longtime reader and good friend of mine sent me a link this week to a CNBC story about the loss of fingerprint records in the Office of Personnel Management hack I have written about before. It’s just one more nail in the coffin of a doltish bureaucracy that -- you know I’m speaking the truth here -- will probably result in those doltish bureaucrats getting even more power, even more data, and ultimately losing those data, too.
So the story says they lost the fingerprint records of 5.6 million people! Game over.
Remember how this story unfolded? There had been a hack and some records were compromised. Then there had been a hack and hundreds of thousands of records were compromised. Rinse-repeat almost ad infinitum until now we know that 5.6 million fingerprint records were lost.
I think it is safe to assume at this point that a massive number of records held by the Office of Personnel Management have been accessed and copied by the bad guys. The breach went undetected for months and the attackers had high-bandwidth access, so whatever secrets there were in those records -- background checks, security clearances, etc. -- are now probably for sale.
Or are they? It turns out there are far worse things that could be done with the records -- all the records, not just fingerprints -- than simply selling or even ransoming them. So I sat around with my buddies and we wondered aloud what this could all mean. We’re folks who have been in technology forever and we’re not stupid, but we aren’t running the NSA, either, so take what I am about to write here as pure speculation.
"The only way I can imagine it hurting someone is if false criminal records were created using them," said one friend.
Shit, I hadn’t thought of that! We get so caught-up in the ideas of stealing/revealing, stealing/selling, and stealing/ransoming that I, for one, hadn’t considered the more insidious idea that records could be tampered with or new ones created. Turn a few thousand good guys into bad guys in the records, create a few thousand more people who don’t actually exist, and that system will become useless.
"It’s pretty grim", said another friend. "Worst case it takes fingerprints out of the security toolbox. If you had 5+ million fingerprints on file how could that help you be a bad guy? Or what if the bad guys have ALREADY COMPROMISED THE FINGERPRINT DATABASE? What if they replaced all 5 million fingerprints with one? That was certainly within their capability to do and you know the Feds wouldn’t tell us if they had. If I was a bad guy I would steal the database, corrupt what was left behind, then hold the real fingerprint records for ransom. $100 each? In Bitcoins?".
That guy has real criminal potential, I’d say, but he’s right that we’ll never really know.
"It was my impression the way computers read and store fingerprint signatures is different than they way they’re optically used and searched", explained another friend. "In theory you couldn’t reproduce a fingerprint from its electronic signature. But the bad guys may have optical copies of people’s fingerprints, and one could probably do more with them. At least with the pay services they could control and secure in software where they read a fingerprint. I think there will be ways like this to keep the theft from messing up the electronic payment systems. I hope".
And all this was prelude to Thursday’s arrival of Chinese President Xi Jinping specifically for cyber security talks. Beltway pundits say we need to pressure China to stop the cyber attacks. We need to put more leverage and more pressure on China. Yeah, right. That will never work. Even if China went 100 percent clean there are probably 20 other countries doing the same thing.
My guess, with the Chinese President here and cybersecurity talks on the table, is that we’ve co-created a new, entirely Big Data edition of the old Cold War Mutually Assured Destruction (MAD). They have all of our data but we have all of theirs, too. Either everything is now useless on both sides or we find a way to live with it and the spies all get to keep their jobs, after all. This job keeping aspect is key -- cops need criminals.
If we find a way to live with records loss in this manner it also means both China and the USA are now madly stealing the records of every other country. It’s a data arms race.
Now here’s the scary part, at least for me. Who are the runners in this data arms race? Certainly the G8 powers can all compete if they choose to, but then so can impoverished North Korea (remember the Sony hack?). Since this comes down to a combination of brain power and computing power, it doesn’t really require being a state to play in the game. A big tech company could do it. Heck, a really clever individual with a high credit limit on his AWS account could do it, right?
They probably have already.
So what does this mean, readers, for the future of our society? Is it good news or bad? I simply don’t know.
Alex Gibney’s Steve Jobs documentary is available now in some theaters, on Amazon Instant Video and, ironically, on iTunes. It’s a film that purports to figure out what made Steve Jobs tick. And it does a lot, just not that.
I’m not a dispassionate reviewer here. More than a year before Jobs died I tried to hire Alex Gibney to make a Steve Jobs film with me. At that point he suggested I be the director, that he’d coach me ("It’s not that hard", the Oscar-winner claimed.) We talked and met but didn’t come to a deal. Later Gibney decided to do a Jobs film on his own -- this film -- and he came to me for help. We talked and met but again didn’t come to a deal. Nothing is unusual about any of this, but it made me eager to see what kind of movie he would make and how it would compare to the one I originally had in mind.
Now some of you may recall that I did a Steve Jobs film -- The Lost Interview -- also released by Magnolia Pictures, the company showing Gibney’s movie. But my Jobs film was an accident, a stroke of good fortune, a documentary shot in 69 minutes and brought to the screen for under $25,000 including digital restoration and publicity. Gibney’s movie cost $2 million to make.
And the money shows on the screen. Gibney is a very skilled documentary director surrounded by a staff of the best professionals in the business. The film is beautifully shot and the audio is spectacular, too. Even where the audio is bad it is deliberately bad -- for effect. You can hear Gibney asking questions from off camera. You can hear me asking questions, too, because about two minutes of the film were taken from my film, for which my partners and I were paid.
There aren’t very many interviews in the film but the ones he has are good, especially Chrisann Brennan (Lisa’s mother), a very sweet Dan Kottke, and hardware engineer Bob Belleville. All the interviews are excellent but those three stood out for me.
In a documentary film the thing you want most to get and hardly ever do is a moment of true emotion, and Bob Belleville crying while talking about the passing of this man who he also says ruined his life -- well, that’s one of those moments. I wish the film had ended right there, around 45 minutes in.
But it didn’t end there.
The last 40 minutes or so are a succession of negative items that are all true -- backdated stock options, Foxconn employee suicides, corporate tax avoidance, Apple bullying the press, and the disingenuous way Apple treated the news of Steve’s health -- the health of the CEO of a major public company. All these events involved Steve and represented aspects of his personality, but while watching the movie it felt to me like two influences were at work: 1) the need to get in as much material as possible (this would be, after all, Gibney’s only-ever film on the topic), and 2) it was a CNN Films co-production and therefore had to have some element of journalism, not just be a tone poem to narcissism.
So the film is 20 minutes too long. And by the time you get to the end and swing back to the central idea that Gibney is personally trying to figure out Steve Jobs (Gibney is the film’s narrator, not just the guy asking questions from off camera) he doesn’t really come to anything like a conclusion.
This is funny given our earlier discussions back in New York about the Walter Isaacson authorized biography of Jobs that we had both thought was kind of a snow job. At the end of that book Isaacson had pretty much thrown up his hands saying that Steve was "complicated" and therefore beyond understanding.
Steve certainly was complicated, but I expected more of a conclusion from Gibney, a sense of really coming to terms with Steve.
Ultimately Steve Jobs wasn’t the man in the machine, he was the machine. And the mourning for Steve that so confused Gibney, because he saw Jobs as a very unlovable character, was mourning for a passing age as much as a man. After that the iPhone became a phone, Apple became a company, and technology pretty much lost any pretense of character.
And Captain Hook was dead.
I’ve been quiet lately, I know. My sons’ Kickstarter campaign has taken a toll on their Venture Capitalist… me. I never before appreciated the physical effort that goes into managing what is, for me, a significant investment. They do the work but I pay for a lot of it and that brings with it the need to oversee -- something I’ve never been very good at doing. You’ll see the result, hopefully, next week.
While I’ve been so preoccupied a lot has happened in the technology world. Apple introduced a slew of new products and Alex Gibney released his Steve Jobs documentary. I’ll comment on both of these shortly. Yahoo was denied its tax-free Alibaba spinoff and so has to go to Plan B. I have such a Plan B (or C or D) for Yahoo, myself and will explain it soon. There are some new technologies you ought to know about, too. There are always those.
But this week was also the anniversary of 9/11 and the most noteworthy thing about it was that the only people who mentioned the attack to me weren’t American, and for them it seemed to be oddly nostalgic.
I feel no nostalgia for 9/11.
But that doesn’t mean we should forget what happened or how we as a nation have handled events since. So with that in mind I’ll point you to my original 9/11 column published two days after the attack, 14 years ago. I think it stands up pretty well after all this time.
Back then the column made many readers angry, but then I’m good at that. It also made them think. Read it again, please, if you have the time.
Google this week introduced its first Wi-Fi router and my initial reaction was "Why?" Wi-Fi access points and home routers tend to be low-margin commodity products that could only hurt financial results for the search giant. What made it worth the pain on Wall Street, then, for Google to introduce this gizmo? And then I realized it is Google’s best hope to save the Internet… and itself.
Wi-Fi is everywhere and it generally sucks. Wi-Fi has become the go-to method of networking homes and even businesses. I remember product introductions in New York back in the 80s and 90s when we were told over and over again that it cost $100 per foot to pull Ethernet cable in Manhattan (a price that was always blamed on the local electricians union, by the way). Well the lesson must have stuck, because more and more Ethernet is for data centers and Wi-Fi is for everything else. Even my old friend, Ethernet inventor Bob Metcalfe, has started claiming that Wi-Fi is Ethernet, which it isn’t and he knows that.
The contrast between these technologies is stark. My three sons have been developing a computer hardware product they’ll be throwing up on Kickstarter in a couple of weeks, and the lab they’ve built next to the foosball and pool tables in their man-cave (that’s what they call it) is entirely hard-wired with gig-Ethernet and what a joy that is. Lights flash and things happen exactly the way they are supposed to, while the rest of the house is wireless and in constant networking turmoil. We have an 802.11ac network with no parts more than a year old, yet still the access point in Mama’s room (one of five) loses its mind at least twice a day.
Wi-Fi is a miracle and it’s not going away, but what we have today is generally a hodgepodge of technologies and vendors that kinda-sorta work together some of the time. Every Wi-Fi vendor claims interoperability while at the same time making the point that if you buy your equipment only from them and stick with the latest version (replace everything annually) it will work a lot better. A single old 802.11b device, we’re told, can bring much of the network back to 1999 speeds.
The problem with Wi-Fi, as I understand it, is that 802.11 is a LAN technology developed with little thought to its WAN implications. How many of us run local servers? So nearly everything on a Wi-Fi network has to do with reaching out 30 or so hops across a TCP/IP network that wasn’t even a factor when Wi-Fi was being developed 20 years ago by electrical (rather than networking) engineers. As a result we have queuing and timing and buffering problems in Wi-Fi that make bufferbloat look simple. These problems exist right down to the chip level, where the people who actually know how to fix them generally have no access.
So what does this have to do with Google introducing a Wi-Fi router? Well, Google’s continued success relies on the Internet actually functioning all the way out to that mobile device or (shudder) Xbox in your son’s bedroom. The Xbox, if you didn’t know, is a particularly heinous networking device, especially over Wi-Fi. If Wi-Fi is the future of the Internet then Google’s future success depends on making Wi-Fi work better, hence the router, which I expect will become something of a reference design for other vendors to copy.
Google’s $199 OnHub, which you can order now, does a lot of things right. It supports every Wi-Fi variant, has 13 antennas, and switches seamlessly between 2.4-GHz and 5-GHz operation on constantly varying channels, trying to get the best signal to the devices that need it. This is all from the wireless LAN best-practices playbook, and so of course Google says that an all-Google Wi-Fi network is the best way to go. I’m guessing my ramshackle home network will require three of the things, and I wonder how all that shaking and baking will function in a multi-access-point environment.
But those 13 antennas and the 1.4-GHz Qualcomm processor don’t inherently address the problems Wi-Fi brings to the Internet. That’s where OnHub is potentially even more radical, because it’s the first such device that’s likely to be managed by the vendor and not by you. One huge problem with Wi-Fi is that the firmware in these devices is difficult to upgrade and impossible to upgrade remotely, but OnHub promises to change that with a continuous stream of tweaks to its Gentoo brain straight from the Googleplex. If we forget privacy considerations for a moment this is a brilliant approach, because it makes each Wi-Fi network a dynamic thing capable of being optimized beyond anything imagined to date.
I know, having looked deep into the soul of my own Wi-Fi network, that there’s the potential to increase real networking performance (measured not just by bitrate, but by a combination of bitrate and latency) by at least 10X, but to make that happen requires constant tuning and updates.
Whether Google is the best outfit to trust with that tuning and those updates is another story.
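If you want to see the latency half of that bitrate-plus-latency measure on your own network, here is a minimal sketch. It just times TCP handshakes to your router, so the gateway address and port below are assumptions you will need to change. Run it once on an idle network and again while something saturates the Wi-Fi link, then compare the medians; the difference is the queuing penalty described above.

```python
# Minimal latency probe: time TCP handshakes to a host on the local network.
# The gateway address and port are assumptions -- substitute your own router's
# IP and any TCP port it answers on (80 works for many home routers' admin pages).
import socket
import statistics
import time

GATEWAY = "192.168.1.1"   # assumed router address
PORT = 80                 # assumed open TCP port on the router
SAMPLES = 50

def probe_once(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the time in milliseconds to complete one TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = []
for _ in range(SAMPLES):
    try:
        samples.append(probe_once(GATEWAY, PORT))
    except OSError:
        pass  # timed out or refused; skip this sample
    time.sleep(0.2)

if samples:
    print(f"samples: {len(samples)}  median: {statistics.median(samples):.1f} ms  "
          f"worst: {max(samples):.1f} ms")
else:
    print("No successful probes -- check the gateway address and port.")
```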
Starting in 1977 I bought a new personal computer every three years. This changed after 2010 when I was 33 years and eleven computers into the trend. That’s when I bought my current machine, a mid-2010 13-inch MacBook Pro. Five years later I have no immediate plans to replace the MacBook Pro and I think that goes a long way to explain why the PC industry is having sales problems.
My rationale for changing computers over the years came down to Moore’s Law. I theorized that if computer performance was going to double every 18 months, I couldn’t afford to be more than one generation behind the state-of-the-art if I wanted to be taken seriously writing about this stuff. That meant buying a new PC every three years. And since you and I have a lot in common and there are millions of people like us, the PC industry thrived.
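The arithmetic behind that three-year rule is simple: if performance doubles every 18 months, then after t months a machine trails the state of the art by a factor of 2^(t/18), so replacing every 36 months keeps the gap at the single generation -- a factor of four -- described above.

```latex
\text{gap}(t) = 2^{t/18}, \qquad \text{gap}(36\ \text{months}) = 2^{36/18} = 4
```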
It helped, of course, that platforms and applications weren’t always backward-compatible and that new MIPS-burning apps appeared with great regularity. But to some extent those times are past. Productivity applications have stalled somewhat and really powerful applications are moving to the cloud. But there’s something more, and that’s a robust industry of third-party upgraders offering to help us modernize the PCs we already own.
When my mid-2010 MacBook Pro hit its third birthday I could have bought a new one for $1200, but instead I upgraded the memory and hard drive, going from 4 gigs of RAM and a 240-gig drive to 16 gigs and a 1TB hybrid drive. My MacBook was reborn! It helped, I must admit, that purely by chance I had bought the only 2010 model that could be upgraded to 16 gigs of RAM. The total cost was $300, I made the changes myself, and not a penny went to Cupertino.
Earlier this year I replaced the battery for $80 (it wasn’t dead but showing signs of distress and the new battery has 50 percent more capacity). And just this week I replaced my first mechanical component to actually wear out -- the keyboard. The replacement cost $25 with free shipping and included a new backlight that I didn’t actually need. This makes my total investment in the MacBook about $1605 for a device that has so far given me at least 10,000 hours of use. That works out to $0.16 per hour for my primary professional tool -- a tool that somehow supports five people and two dogs.
It’s the greatest bargain in the history of work.
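Checking that math -- and assuming the original 2010 purchase price was the same $1,200 a replacement would have cost three years later, since the column doesn’t state it -- the running total and the hourly figure hold up:

```latex
\$1200 + \$300 + \$80 + \$25 = \$1605, \qquad
\frac{\$1605}{10{,}000\ \text{hours}} \approx \$0.16\ \text{per hour}
```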
That keyboard had to go. First the e-key began to fade where my fingernail had pounded it approximately 200,000 times per year. But the e, itself, never failed. That was left to the a and the t, which looked fine but came to work intermittently. I’d pound away at that God-damned a, especially, until I could stand it no more.
But replacing a MacBook Pro keyboard is not a simple task, which is why I waited so long. If you go to the Apple Store they’ll swap out the entire top case of the computer for $300+, which is crazy for a device that’s only worth about that on Craigslist. But buying a used model on Craigslist isn’t good, either, because that keyboard will be five years old, too.
Nope, I had to dig in and replace the keyboard myself.
Here’s the problem: in order to replace a MacBook Pro keyboard you have to remove and then replace SEVENTY-ONE TINY SCREWS. No sane person wants to do that, but it had to be done. It took me about two hours to accomplish thanks to the YouTube video that showed me exactly what to do.
Blame YouTube for enabling this DIY trend.
You don’t want to know the detritus I found under and in that keyboard. Suffice it to say that I’m surprised PCs don’t attract bedbugs. Maybe they do?
Following the keyboard replacement and giving my trackpad a bath in 99 percent alcohol, my MacBook is now running better than new, which is not to say it is running as well as a new model. The 2010 nVIDIA graphics aren’t very good for one thing. I have a Raspberry Pi that streams video better. The SD card reader has stopped working, too, but I never used that until last week and an external card reader works fine.
So I probably will buy a new computer -- two years from now. I hope Cupertino can wait.
I’ve been working on a big column or two about the Office of Personnel Management hack while at the same time helping my boys with their Kickstarter campaign to be announced in another 10 days, but then IBM had to go yesterday and announce earnings and I just couldn’t help myself. I had to put that announcement in the context you’ll see in the headline above. IBM is so screwed.
Below you’ll see the news spelled-out in red annotations right on IBM’s own slides. The details are mainly there but before you read them I want to make three points.
First, IBM’s sexy new businesses (cloud, analytics, mobile, social, and security, or CAMSS) aren’t growing -- and probably won’t grow -- faster than its old businesses are shrinking and dying. It doesn’t have to be this way. IBM could carefully invest in some of those older businesses and become a much better company and investment.
Second is something that doesn’t immediately fall out of these slides but I think it should be said: from what I hear IBM’s analytics sales (the very essence of its Big Data strategy) have been dismal. Nobody is buying. And a third point that could be an entire column in itself is that Google’s two latest cloud announcements (support for Windows Server and broad release of its Kubernetes container manager) effectively blow out of the water IBM’s nascent cloud operation.
Sadly, IBM has already lost the cloud and analytics wars, it has yet to be even a factor in mobile, and social is a business IBM has yet to explain how it will make money from. Of all these new businesses that will supposedly drag IBM out of the mess it’s currently in, only data security has a chance, and that’s if they don’t blow that, too.
Al Mandel used to say "the step after ubiquity is invisibility" and man was he right about that. Above you’ll see a chart from the Google Computers and Electronics Index, which shows the ranking of queries using words like "Windows, Apple, HP, Xbox, iPad" -- you get the picture. The actual terms have changed a bit since the index started in 2004 as products and companies have come and gone, but my point here is the general decline.
Just as Al predicted, as technology has become more vital to our lives we’ve paradoxically become less interested, or at least do less reaching out. Maybe this is because technologies become easier to use over time or we have more local knowledge (our kids and co-workers helping us do things we might have had to search on before).
Whatever the reason, I think it is mirrored in the decline of specialist technology publications. What happened to BYTE Magazine? Actually the last editor of BYTE, my friend Rafe Needleman, is the new editor of Make Magazine (there were a number of steps in between for Rafe), so maybe there are search upticks for technologies like 3D printing and Raspberry Pi computers even as we yawn over Windows 10 or iOS 9.
Where it was once enough to be a user, maybe the geeks among us now need to be masters. It’s an ironic return not to the PC glory days of the 90s, but to the PC experimenter days of the 70s. Or so it seems. Whatever the reason, we’re certainly more blasé than we used to be about this stuff that has come to absolutely control our lives.
Weird, eh?
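For anyone who wants to poke at the same kind of data, here is a rough sketch using pytrends, an unofficial third-party wrapper around Google Trends (not a Google API, so it can break or be rate-limited at any time). The keyword list is illustrative rather than the index’s actual basket, and the category ID of 5 is my assumption for Computers & Electronics.

```python
# Rough sketch: pull Google Trends interest for a few tech query terms
# using the unofficial pytrends library (pip install pytrends).
# Keywords and the category ID are assumptions for illustration.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
keywords = ["Windows", "iPad", "Xbox"]   # illustrative terms, not the index's exact basket
pytrends.build_payload(kw_list=keywords, cat=5, timeframe="all")

interest = pytrends.interest_over_time()  # pandas DataFrame indexed by date
print(interest.head())   # the early years
print(interest.tail())   # the recent years -- compare the two to see the slide
```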
This is my promised third column in a series about the effect of H-1B visa abuse on US technology workers and ultimately on the US economy. This time I want to take a very high-level view of the problem that may not even mention words like "H-1B" or even "immigration", replacing them with stronger Anglo-Saxon terms like "greed" and "indifference".
The truth is that much (but not all) of the American technology industry is being led by what my late mother would have called "assholes". And those assholes are needlessly destroying the very industry that made them rich. It started in the 1970s when a couple of obscure academics created a creaky logical structure for turning corporate executives from managers to rock stars, all in the name of "maximizing shareholder value".
Lawyers arguing in court present legal theories -- their ideas of how the world and the law intersect and why this should mean their client is right and the other side is wrong. Proof of one legal theory over another comes in the form of a verdict or court decision. We as a culture have many theories about institutions and behaviors that aren’t so clear-cut in their validity tests (no courtroom, no jury) yet we cling to these theories to feel better about the ways we have chosen to live our lives. In American business, especially, one key theory says that the purpose of corporate enterprise is to "maximize shareholder value". Some take this even further and claim that such value maximization is the only reason a corporation exists. Watch CNBC or Fox Business News long enough and you’ll begin to believe this is the God’s truth, but it’s not. It’s just a theory.
It’s not even a very old theory, in fact, only dating back to 1976. That’s when Michael Jensen and William Meckling of the University of Rochester published in the Journal of Financial Economics their paper Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure.
Their theory, in a nutshell, said there was an inherent conflict in business between owners (shareholders) and managers, that this conflict had to be resolved in favor of the owners, who after all owned the business, and the best way to do that was to find a way to align those interests by linking managerial compensation to owner success. Link executive compensation primarily to the stock price, the economists argued, and this terrible conflict would be resolved, making business somehow, well, better.
There are many problems with this idea, which appears to be more of a solution in search of a problem. If the CEO is driving the company into bankruptcy or spends too much money on his own perks, for example, the previous theory of business (and the company bylaws) say shareholders can vote the bum out. But that’s so mundane, so imprecise for economists who see a chance to elegantly align interests and make the system work smoothly. The only problem is the alignment of interests suggested by Jensen and Meckling works just as well -- maybe even better -- if management just cooks the books and lies. And so shareholder value maximization gave us companies like Enron (Jeffrey Skilling in prison), Tyco International (Dennis Kozlowski in prison), and WorldCom (Bernie Ebbers in prison).
It’s just a theory, remember.
The Jensen and Meckling paper shook the corporate world because it presented a reason to pay executives more -- a lot more -- if they made their stock rise. Not if they made a better product, cured a disease, or helped defeat a national enemy -- just made the stock go up. Through the 1960s and 1970s, average CEO compensation in America per dollar of corporate earnings had gone down 33 percent as companies became more efficient at making money. But now there was a (dubious) reason for compensation to go up, up, up, which it has done consistently for almost 40 years until now we think this is the way the corporate world is supposed to work -- even its raison d’etre. But in that same time real corporate performance has gone down. The average rate of return on invested capital for public companies in the USA is a quarter of what it was in 1965. Sure productivity has gone up, but that can be done through automation or by beating more work out of employees.
Jensen and Meckling created the very problem they purported to solve -- a problem that really hadn’t existed in the first place.
Maximizing shareholder return has given us our corporate malaise of today, when profits are high (but are they real?) and stocks are high, but few investors, managers, or workers are really happy or secure. Maximizing shareholder return is bad policy both for public companies and for our society in general. That’s what Jack Welch told the Financial Times in 2009, once Welch was safely out of the day-to-day earnings grind at General Electric: "On the face of it", said Welch, "shareholder value is the dumbest idea in the world. Shareholder value is a result, not a strategy… your main constituencies are your employees, your customers, and your products. Managers and investors should not set share-price increases as their overarching goal. … Short-term profits should be allied with an increase in the long-term value of a company".
Now let’s look at what this has meant for the US computer industry.
First is the lemming effect, where several businesses in an industry all follow the same bad management plan and collectively kill themselves. We saw it in the airline industry in the 1980s and 90s. They all wanted to blame regulation, then deregulation, then something else. The result was decimation and consolidation of America’s storied airlines, and the service of those consolidated companies generally sucks today as a result. Their failings made necessary Southwest, JetBlue, Virgin America, and other lower-cost yet better-service airlines.
The IT services lemming effect has companies promising things that cannot be done at a profit. It is more important to book business at any price than it is to deliver what was promised. In their rush to sign more business the industry is collectively jumping off a cliff.
This mad rush to send more work offshore (to get costs better aligned) is an act of desperation. Everyone knows it isn’t working well. Everyone knows doing it is just going to make the service quality a lot worse. If you annoy your customer enough they will decide to leave.
The second issue is that you can’t fix a problem by throwing more bodies at it. USA IT workers make about 10 times the pay and benefits that their counterparts make in India. I won’t suggest USA workers are 10 times better than anyone; they aren’t. However they are generally much more experienced and can often do important work much better and faster (and in the same time zone). The most effective organizations have a diverse workforce with a mix of people, skills, experience, etc. By working side by side these people learn from each other. They develop team-building skills. In time the less experienced workers become highly effective experienced workers. The more layoffs and the more jobs sent offshore, the more these companies erode the effectiveness of their service. An IT services business is worthless if it does not have the skills and experience to do the job.
The third problem is that how you treat people matters. In high-performing firms the workforce is vested in the success of the business. They are prepared to put in the extra effort and extra hours needed to help the business -- and they are compensated for the results. They produce value for the business. When you treat and pay people poorly you lose their ambition and desire to excel; you lose the performance of your workforce. It can now be argued that many workers in IT services are no longer providing any value to the business. This is not because they are bad workers. It is because they are being treated poorly. Firms like IBM and HP are treating both their customers and employees poorly. Their management decisions have consequences and are destroying their businesses.
At this point some academic or consultant will start talking about corporate life cycles and how Japan had to go from textiles to chemicals to automobiles to electronics to electronic components simply because of limited real estate that had to produce more and more revenue per square foot so it was perfectly logical that Korea would inherit the previous generation of Japanese industry. But that’s not the way it works with services, which have no major real estate requirements. There is no -- or should be no -- life cycle for services.
So evolution is not an option because there’s no place to evolve to. The IT industry has turned into a commodity business of high volume, lower margin products and services. The days of selling a $250,000 system for $1,000,000 and passing around big commission checks are gone.
Good management and business optimization are both essential and rare. You can’t succeed by merely saying you will solve your problems by selling more. You have to run your business a lot smarter. The way for an IT company to succeed is by being smarter than the competition, not sneakier, dirtier, or less empathetic.
Empathy, what’s that?
IBM and HP (my go-to examples lately) are failing to recognize that a big part of their business has become a commodity. Calling it a low margin business and selling it off ignores the basic need for these companies to evolve and make serious changes to their business models. If the world is moving to low cost servers and you sell off your server business, what will you sell in the future?
Cloud computing is a prime example of a high-volume, low-margin, commodity service. If you don’t make the adjustments to operate as a commodity business, you won’t be able to succeed at selling cloud services. IBM and HP continue to cling to their 1990s business model. Soon they will have no high-margin products and services to sell, and they will no longer have any high-volume products or services either. Every time they sell a high-volume, low-margin business they paint themselves tighter into a corner.
A few weeks ago I was on a Southwest flight. I heard one Southwest employee say to some others, "planes can’t make money if they are sitting on the ground". They all knew exactly what needed to be done and why. You rarely see that business awareness and focus in every employee of a company, yet it is common at Southwest. You don’t see it at most other airlines. Southwest knows the value in its service lies in transporting people. If its planes are not in the air, they’re not transporting people. Does HP’s or IBM’s whole organization understand the "value" of its service?
It is only a matter of time until a company emerges that truly understands the value of IT service, because that need isn’t going away. Companies are only as smart as the collective intelligence of all their workers. If all their workers understand the value and business model, they can be a formidable competitor. When that happens IBM and HP will be in serious trouble. IBM ignores 99 percent of its workforce and keeps them in the dark.
There was a period when the whole airline industry acted stupid. A lot of airlines failed and it was pretty ugly. There was a period when the Detroit automakers suffered a major brain freeze. Japanese companies introduced cars that were much, much better and slowly eroded Detroit’s dominance in the industry. Today Toyota and GM trade places as the world’s largest car company. Who would have thought that would happen?
We’re right on the edge of losing our computer industry. As the market moves to Intel servers, anyone can become a big player. Where does that leave HP and IBM? The quality of "services" is so terrible right now the market is hungry for a better provider. If one emerges in Asia where does that leave HP and IBM? When that new spunky company makes it to the CIO’s office HP and IBM will be in serious trouble.
Honest to God, these American companies think that can’t happen.
We are at a very dangerous period of time in computer history and the storied companies that made most of that history don’t even see it. That’s because they are fixated on the vision of their leaders and their leaders are fixated on visions of their own retirements coming an average of four years from today.
So look, for example, at Meg Whitman and Ginni Rometty. All the things they’re doing to "transform" their businesses are causing more harm than good. They really are not aligning their business models to evolving market conditions.
We’ve lost the consumer electronics industry, we’ve lost over half of the automotive industry, we’ve lost millions of manufacturing jobs, and we’re about to lose our computer industry, too.
But it doesn’t have to happen.
In 1989 when Sony bought Columbia Pictures for $4.3 billion, many in Hollywood thought the end of American entertainment hegemony had begun. But it didn’t happen. It didn’t happen because the value in Hollywood lies almost entirely in the people who work in the entertainment industry -- people who mainly lived at that time in Southern California. Sony, in turn, thought it was going to suck lots of profit out of Columbia but they couldn’t because a big star still cost $10 million per picture, a top director $5 million, etc. And don’t get me started on those Hollywood accountants! All Sony got was the skeleton of Columbia, not the heart or the blood.
Now look at the American IT industry in a similar light. American companies have been pretending to offer a superior product for a superior price while simultaneously cutting costs and cheating customers. Do you think IBM respects its customers? They don’t. But what if they did? What if IBM -- or any other US IT services company for that matter -- actually offered the kind of customer service they pretend they do? What if they solved customer problems instantly? What if they anticipated customer problems and solved them before those problems even appeared? You think that can’t be done? It can be done. And the company that can do it will be able to charge whatever they like and customers will gladly pay it.
True mastery, that’s what we’ve lost. No, we haven’t lost it: we threw it away.
This is the second of three columns relating to the recent story of Disney replacing 250 IT workers with foreign workers holding H-1B visas. Over the years I have written many columns about outsourcing (here) and the H-1B visa program in particular (here). Not wanting to just cover again that old material, this column looks at an important misconception that underlies the whole H-1B problem, then gives the unique view of a longtime reader of this column who has H-1B program experience.
First the misconception as laid out in a blog post shared with me by a reader. This blogger maintains that we wouldn’t be so bound to H-1Bs if we had better technical training programs in our schools. This is a popular theme with every recent Presidential administration and, while not explicitly incorrect, it isn’t implicitly correct, either. Schools can always be better but better schools aren’t necessarily limiting U.S. technical employment.
His argument, like that of Google and many other companies often mentioned as H-1B supporters, presupposes that there is a domestic IT labor shortage, but there isn’t. The United States right now has plenty of qualified workers to fill every available position. If there are indeed exceptional jobs that can’t be filled by ANY domestic applicant, there’s still the EB-2 visa program, which somehow doesn’t max-out every year like H-1B. How can that be if there’s a talent shortage? In truth, H-1B has always been unnecessary.
What the blogger misses, too, is the fact that the domestic IT workforce of today came into their jobs without the very educational programs he suggests are so important. There was no computer science major when I was an undergraduate, for example. For that matter, how many non-technical majors have been working for years as programmers? How many successful programmers never finished college or never attended college at all? I’m not arguing against education here, just pointing out that the IT job path isn’t always short and straight and the result is that the people who end up in those jobs are often more experienced, nuanced, and just plain interesting to work with. What’s wrong with that?
What’s wrong is politicians who can’t code or have never coded are arguing about how many technical workers can dance on the head of a pin, but they simply don’t know what they are talking about.
Now to the H-1B observations of my old friend and longtime reader who has been a CTO at several companies:
My first exposure to H1B was when I was consulting to multiple VC’s back in the dot-com era. Several VC’s I did work for, the portfolio managers would instruct/demand their portfolio companies hire H1B’s instead of Americans for 'common' jobs such as programmers, DBA’s (database administrators), network admins and even IT help desk people.
The reason of course was $$$. The H1B’s cost approx. 1/3rd or 1/4th the cost of the comparable American in same job.
I remember this one VC board meeting where the CEO of a portfolio company said the H1B’s in his company were complaining about their sub-standard pay, and one of the VC partners said, "Fuck them. Tell them if they don’t like it, we’ll toss their ass out, get another H1B to replace you and you’ll be on your way back to India".
Fast forward to mid- to late-2000s:
I learned (while working) at (an unnamed public technology company) a LOT about H1B. We had contracted with several of the Indian firms such as Infosys, Wipro, Tata, Impetus, TechMahindra for 'outsourcing' and 'offshoring' ordinary tech work like programming, dba’s, documentation, etc.
The rates were very enticing to any corporation: we were paying anywhere from $15/hour to a *max* of $28/hour for H1B folks from those Indian firms (which btw, had set up US subsidiaries as 'consulting/contractor firms' so that American companies were hiring "American workers").
The jobs we were hiring from TechMahindra, Wipro, etc., were jobs that American workers of same skillset and experience would be paid in the range of $80k-$170k (annual, which translates to $52-110/hour when you factor in benefits, medical, etc.). Quite a considerable difference in cost to the corporation.
At one point we had ~800 staff in India who worked for Infosys/Wipro/etc., but we had H1B 'project managers' onsite in the US from Infosys/Wipro/etc. to manage those armies of people in India (i.e. -- deal with language issues, scheduling, etc.).
I got to know some of the H1B’s that were in the US working for us. I asked them, "How can you afford to live here on $15/hour?" The answer was they were living in group homes (e.g. -- 8 guys would rent a townhouse and pool their money for food, etc.), plus had "no life" outside of work.
To which I’d ask, "why are you doing this?" The answer was "it’s better than what we can get at home (India)" and they would manage to save some money. But more importantly, they were getting valuable experience; when they returned to India they would be highly sought after because of their experience in the US.
I know of one case for certain when our intellectual property (software source code) found its way into other companies, by pure coincidence of course, where the other companies were using the same Indian firms.
IMHO, the intent of the H1B program is valid and correct. The implementation and administration are horrible.
The politicians have no clue.
The government administrators who manage the H1B program, and especially the overseers who review the cases on whether (the visa applicant) really has skills that are unique and uncommon, are not educated or experienced enough to make such determinations.
I read some of the forms that were filled out: throw in a lot of techno babble and terms, and the government admin is NOT going to be able to challenge NOR understand it.
The politicians say they’ve addressed the holes by tightening-up the process. But if the first line of defense is the admins who review and determine whether the H1B position really is unique and uncommon, and those admins don’t know the difference between C++ and C#, we’ve accomplished nothing.
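Just to make my friend’s pay comparison concrete, here is a minimal back-of-the-envelope sketch. The 35 percent loading for benefits and payroll overhead is my assumption, not his, chosen so the results land near his $52-$110/hour figures.

```python
# Back-of-the-envelope: converting an American salary into a fully-loaded
# hourly cost, to compare with the $15-$28/hour contractor rates quoted above.
# The 35 percent loading for benefits/medical/payroll overhead is an assumption.
WORK_HOURS_PER_YEAR = 2080      # 40 hours a week, 52 weeks
OVERHEAD = 0.35                 # assumed benefits and payroll loading

def loaded_hourly_rate(annual_salary):
    """Fully-loaded hourly cost of an employee at a given annual salary."""
    return annual_salary * (1 + OVERHEAD) / WORK_HOURS_PER_YEAR

for salary in (80_000, 170_000):
    print(f"${salary:,}/year is roughly ${loaded_hourly_rate(salary):.0f}/hour fully loaded")
# $80,000/year is roughly $52/hour fully loaded
# $170,000/year is roughly $110/hour fully loaded
```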
Disney has been in the news recently for firing its Orlando-based IT staff, replacing them with H-1B workers primarily from India, and making severance payments to those displaced workers dependent on the outgoing workers training their foreign replacements. I regret not jumping on this story earlier because I heard about it back in March, but an IT friend in Orlando (not from Disney) said it was old news so I didn’t follow up. Well now I am following up with what will eventually be three columns, not just about this particular event but about what it says about the US computer industry, which is not good.
First we need some context for this Disney event -- context that has not been provided in any of the accounts I have read so far. What we’re observing is a multi-step process.
Disney had its IT operation in-house, then hired IBM to take it over with the usual transfer of employees followed by layoffs as IBM cut costs to make a bigger profit. Ultimately Disney fired IBM, hired (or re-hired) a new IT staff, which is the group now replaced by H-1Bs employed by an Indian company essentially offering the same services that were earlier provided by IBM. This more detailed story means, for one thing, that the workers being replaced by H-1Bs have for the most part worked for Disney for less than three years.
So the image of some graybeard, now without a job, who had been working for Disney since the days of punched cards simply isn’t accurate. It probably also explains to some extent the Disney severance offer of 10 percent of each worker’s annual salary, which may well have been more than workers with so little seniority might have expected. It’s possible Disney was being generous.
But Disney was also being stupid. This is shown in part by its tone-deaf response to the story, which they clearly weren’t expecting or prepared for.
Some of this comes down to the difference between labor and people. Disney may have been trying to reduce labor costs but did so by dumping people. Journalists and readers get that, some executives and most economists don’t. These aren’t just numbers.
But in Disney’s specific case there’s another underlying issue that has to be taken into account, which is IT mis-management on an epic scale. I have been talking with IT folks I trust from Orlando -- both former Disney workers and others just familiar with the local tech scene -- and the picture of Disney IT that emerges is terrible. Disney turned to IBM not so much to save money as to save IT. But as you can guess, an essentially un-managed IBM contract was viewed by Armonk as a blank check. Disney started complaining, but it is my understanding that at least some of the trouble had to do with Disney’s own communication problems -- problems that didn’t improve once IBM was fired.
"I remember there was a big crisis," recalled a former IBMer who worked on the Disney account. "There was a massive backlog of new service requests, hundreds. It turned out most of the service requests had already been processed and had been waiting on Disney for months, sometimes over a year, for approval. In the interim Disney had changed their request process. So we moved the finished requests to the new system. A few months later most of those requests were still awaiting Disney’s approval".
What we’re likely to see in Orlando, then, is a cycle. Unless Disney cleans house -- really cleans house -- at the management level we’ll see more contractors coming and going.
I blame Disney CEO Bob Iger for not knowing what’s happening at his own company, which I’m told now thinks that moving everything to the cloud is the answer. It isn’t. And with Disney such an iconic brand I’m quite sure there are technical people in Russia, China, North Korea and elsewhere who know very well the company’s vulnerability.
One reader of this column in particular has been urging me to abandon for a moment my obsession with IBM and look, instead, at his employer -- Hewlett Packard. HP, he tells me, suffers from all the same problems as IBM while lacking IBM’s depth and resources. And he’s correct: HP is a shadow of its former self and probably doomed if it continues to follow its current course. I’ve explained some of this before in an earlier column, and another, and another you might want to re-read. More of HP’s problems are covered in a very fine presentation you can read here. Were I to follow a familiar path at this point I’d be laying out a long list of HP mistakes. And while I may well do exactly that later in the week, right here and now I am inspired to do what they call in the movies "cutting to the chase", which in this case means pushing through bad tactics to find a good strategy. I want to lay out in a structural sense what’s really happening at both HP and IBM (and at a lot of other companies, too) so we can understand how to fix them, if indeed they can be fixed at all.
So I’ll turn to the works of Autodesk founder John Walker, specifically his Final Days of Autodesk memo, also called Information Letter 14, written in 1991. You can find this 30-page memo and a whole lot more at Walker’s web site. He has lived in Switzerland for most of this century, where his server today resides in an actual fortress. We may even hear from Walker himself if word gets back that I’ve too brazenly stolen his ideas. Having never met the man, I’d like that.
What follows is an incredibly stripped-down version of Information Letter 14, nixing most of the Autodesk-specific bits and applying the underlying ideas to lumbering outfits like HP and IBM. I’m just one of many people to be inspired by this memo, by the way. It was the basis of Bill Gates’s The Internet Tidal Wave memo from the mid-1990s that led Microsoft to reform itself to take on Netscape.
When major shifts occur in user expectations, dominant hardware and software platforms, and channels of distribution, companies which fail to anticipate these changes and/or react to them once they are underway are supplanted by competitors with more foresight and willingness to act…
This of course describes both HP and IBM -- generally trying to use their corporate mass to lead from behind, which even they know doesn’t work.
Today (Autodesk, HP, IBM -- you name a successful company) is king of the mountain, but it is poised precariously, waiting to be pushed off by any company that seizes the opportunity and acts decisively. One of the largest unappreciated factors in Autodesk’s success has been the poor strategy and half-hearted, incompetent execution that characterized most of our competitors in the past. But betting the future of our company on this continuing for another decade is foolish, a needless prescription for disaster…
You cannot lead an industry by studying the actions of your competitors. To lead, you must understand the mission of your company and take the steps which, in time, will be studied by other, less successful companies seeking to emulate your success…
Autodesk is proud of its open door policy, and counts on it to bring the information before senior management that they need to set the course for the company. Such a policy can work only as long as people believe they are listened to, and that decisions are being made on grounds that make sense for the long-term health of the company. Rightly or wrongly, there is a widely-held belief which I’m articulating because I share it, that management isn’t hearing or doesn’t believe what deeply worries people throughout the company, and isn’t communicating to them the reasons for the course it is setting. This is how bad decisions are made…
NOBODY in the executive suites at either HP or IBM is listening to the troops. No good ideas are welcome at either company at this point, which is stupid.
Now Walker takes us to the crux of the corporate problem faced by both HP and IBM, explaining it in terms of Wall Street’s obsession with profit margins.
Investors and analysts have learned to watch a company’s margins closely. Changes in margin are often among the earliest signs of changes in the fortunes of a company, for good or for ill. When sales, earnings, and margins are rising all together, it usually means the market for the company’s products is growing even faster than the company anticipated; the future seems bright. When margins begin to decline, however, it can indicate the company has let spending outpace sales. When competition begins to affect the company, or even when a company fears future competition, it may spend more on promotion, accelerate product development, and offer incentives to dealers and retail customers -- all reflected in falling margins.
But high margins aren’t necessarily a good thing, particularly in the long term. One way to post high margins is by neglecting investment in the company’s future. Any profitable company can increase its earnings and margin in the short run by curtailing development of new products and improvements to existing products, by slashing marketing and promotional expenses, and by scaling back the infrastructure that supports further growth. Since there’s a pipeline anywhere from six months to several years between current spending and visible effects in the market, sales aren’t affected right away. So, with sales constant or rising slowly and expenses down, earnings and margin soar and everybody is happy.
For a while, anyway. Eventually momentum runs out and it’s obvious the company can’t sustain its growth without new products, adequate promotion, and all the other things that constitute investment in the future of the business. It’s at that point the company becomes vulnerable to competitors who took a longer view of the market.
One of the most difficult and important decisions the management of a company makes is choosing the level of investment in the future of the business. Spend too little, and you’re a hero in the short term but your company doesn’t last long. Spend too much, and the company and its stock falls from favor because it can’t match the earnings of comparable companies…
This is the pit into which HP and IBM have fallen. They want to maintain margins to keep Wall Street happy, but the easiest way to do that is by cutting costs. Eventually this shows up as declining sales, which IBM has now experienced for three straight years. Yet with a combination of clever accounting and bad judgment even declining sales can be masked… for a while.
Let’s turn now to what happens to the money that remains after all the bills and taxes have been paid. A small amount is paid back to the shareholders as dividends, but the overwhelming percentage goes into the corporate treasury -- the bank account -- the money bin. When a company runs the kind of margins Autodesk does for all the years we have, that adds up to a tidy sum: in Autodesk’s case (in 1991) more than $140 million. When thinking about the future of the company, what can and can’t be done with that cash is vital to understand.
At the simplest level, the money belongs to the company and management can do anything it wishes within the law: give some back to the stockholders as a special dividend…, buy other companies…, buy real estate or other capital goods for the company…, or just invest the money, collect the income, and add it to earnings…
But here’s the essential point. When you spend a dollar, whether to hire a programmer, buy a truck, run an ad, or take over Chrysler, it doesn’t matter whether it came from the bank account or from current sales… Regardless of how prudent you’ve been piling up money over the years, the moment you spend any of it in your business, it’s just as if you increased your day to day operating budget. That means rising expenses without an increase in sales, and that translates into… falling margins.
About the only thing you can do with the money that doesn’t cause margins to fall, other than giving it back in dividends, is investing it in other companies. When you make an investment, that’s carried on the books as capital. As long as you don’t have to write the investment off, it doesn’t affect your operating results…
The accounting for money in the bank, then, can create a situation where pressing company needs remain unmet because the expenditures required would cause margins to fall, yet at the same time, the company is actively investing its cash hoard outside the company, in other businesses, because those investments do not show up as current operating expenses. Thus, the accumulated earnings of a company, the ultimate result of its success, can benefit any venture except the one that made the money in the first place…
This explains why IBM is always buying little companies then squeezing them, often to death, for profits. Buying these companies is an investment and therefore not a charge against earnings. But having bought the companies, spending any more money on them is not an investment and hurts earnings. IBM could develop the same products internally but that would appear to cost money. So instead they try to buy new products then deliberately starve to death the companies that created them. In accounting terms this makes perfect sense. To rational humans it is insane. Welcome to IBM.
Management strives, quarter by quarter, to meet the sales and earnings expectations of the Wall Street analysts and to avoid erosion in the margin which would be seen (rightly) as an early warning, presaging problems in the company. In the absence of other priorities this is foremost, as the consequences of a stumble can be dire…
But management has a more serious responsibility to the shareholders; to provide for the future of the company and its products. Focusing exclusively on this quarter’s or this year’s margins to the extent that industry averages dictate departmental budgets for our company is confusing the scoreboard with the game…
I attended a meeting in early 1989, where I heard a discussion of how, over the coming year, it would be necessary for Autodesk to reduce its sales and marketing budget to lower and lower levels. Walking in from the outside, I found this more than a little puzzling. After all, weren’t we in the midst of a still-unbroken series of sales and earnings records? Wasn’t this year expected to be the best ever? Weren’t we finally achieving substantial sales of AutoCAD to the large companies and government?
True, but there was this little matter of accounting, you see. From time immemorial, most copies of AutoCAD had been sold by dealers. To simplify the numbers, assume the retail price of AutoCAD is $1000, the dealer pays $500 for it, and all sales by dealers are at the full list price. So, for every copy of AutoCAD that ends up in a customer’s hands, Autodesk gets $500 and the dealer gets $500. Autodesk reports the $500 as Sales, deducts expenses, pays taxes, and ends up with earnings, say $125, corresponding to a margin of 25 percent.
But suppose, instead, we sell the copy of AutoCAD to a Fortune 500 account -- Spacely Sprockets, perhaps? In that case, the numbers look like this (again simplified for clarity). Autodesk ships the copy of AutoCAD directly to the customer and invoices Spacely Sprockets for the full list price, $1000. However, the sale was not made directly by Autodesk; the order was taken by one of our major account representatives, the equivalent of dealers for large accounts. When we get the check, we pay a commission to this representative. Assume the commission is $500.
Regardless of who bought the copy of AutoCAD, the financial result, the fabled "bottom line", is the same. There’s one fewer copy of AutoCAD on our shelf, and one more installed on a customer’s premises. Autodesk receives $500, and our dealer or representative gets $500. But oh what a difference it makes in the accounting! In the first case, where Autodesk sold the copy of AutoCAD to the dealer, that was the whole transaction; whatever happened to the copy of AutoCAD after the dealer paid for it has no effect on Autodesk’s books. Autodesk sells, dealer pays, end of story. But in the second case, when Autodesk sells to Spacely Sprockets, that appears on Autodesk’s ledger as a sale of AutoCAD for $1000. The instant the $1000 shows up, however, we immediately cut a check for the commission, $500, and mail it to the representative, leaving the same $500 we’d get from the dealer. Same difference, right?
Not if you’re an accountant! In the first case, Autodesk made a sale for $500 and ended up, after expenses and taxes, with $125, and therefore is operating with a 25 percent margin (125/500). In the Spacely sale, however, the books show we sold the product for $1000, yet wound up only with the same $125. So now our margins are a mere 12.5 percent (125/1000). And if we only kept $125 out of the $1000 sale, why that must mean our expenses were 1000-125=875 dollars! Of that $875, $375 represent the same expenses as in the dealer sale, and the extra $500 is the representative’s commission which, under the rules of accounting, goes under "Cost of sales".
Or, in other words, (the money) comes out of Autodesk’s marketing and sales budget.
That’s why the marketing budget had to be cut. To the very extent the major account program succeeded, it would bankrupt the department that was promoting it. If we were wildly successful in selling AutoCAD into the big companies, Autodesk would make more sales, earn more profits, then be forced to cancel marketing program after marketing program as the price of success! All because the rules of accounting would otherwise show falling margins or a rising percentage of revenue spent on "cost of sales".
The purpose of this discussion is not to complain about the rules of accounting. You have to keep score somehow… Instead, what disturbed me so much about this incident was the way management seemed to be taking their marching orders from the accounting rules rather than the real world. Budgets were actually being prepared on the assumption that marketing and sales efforts would have to be curtailed to offset the increased "cost of sales" from the major account sales anticipated over the year. Think about it: here we were planning for what was anticipated to be and eventually became the best year in Autodesk’s history, and yet were forced to cut our marketing and sales as a direct consequence of its very success. Carried to the absurd, if the major account program astounded us and began to dwarf dealer sales, we would have to lay off the entire marketing and sales department to meet the budget!
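To make Walker’s dealer-versus-Spacely arithmetic concrete, here is a minimal sketch using his own simplified numbers: a $1000 list price, a 50 percent dealer discount or a $500 major-account commission, and $125 left over after expenses and taxes either way.

```python
# A minimal sketch of Walker's dealer-versus-direct example.

def margin(reported_sales, earnings):
    """Margin as reported: earnings divided by booked revenue."""
    return earnings / reported_sales

earnings = 125   # what Autodesk keeps after expenses and taxes, either way

# Case 1: dealer sale -- Autodesk books only the $500 the dealer pays.
dealer_sale = 500
print(f"dealer sale margin: {margin(dealer_sale, earnings):.1%}")    # 25.0%

# Case 2: direct sale to Spacely Sprockets -- Autodesk books the full
# $1000 list price, then pays a $500 commission that lands in "cost of sales".
direct_sale = 1000
commission = 500
print(f"direct sale margin: {margin(direct_sale, earnings):.1%}")    # 12.5%

# Same cash in the bank, same $500 to the channel, half the reported margin --
# which is why the marketing budget had to shrink as the direct program grew.
```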
This is another reason why HP and IBM have taken to ruthlessly cutting expenses, which is to say people. These aren’t huge one-time layoffs to lay the groundwork for true corporate re-orgs, they are exactly as John Walker feared: labor reductions driven purely by accounting rules. For the people of HP and IBM they are death by a thousand cuts.
The only way to use retained earnings without directly increasing expenses is by investing it… Unfortunately, unless the goals and priorities of Autodesk’s current Business Development effort have been seriously miscommunicated, it seems to me embarked on a quixotic search for something which in all probability does not exist: "The Next AutoCAD"… In other words, we’re betting the future growth of our company on our ability to consistently identify products which sell for more than any other widely-distributed software and will be sold exclusively by a distribution channel which has demonstrated itself incapable of selling anything other than AutoCAD.
What’s wrong with this picture?
When you adopt unrealistic selection criteria, you find unattractive alternatives. The desiderata that Autodesk is seeking in the products on which the company’s future will be bet would have excluded every single successful product introduced since 1982 by Microsoft, Lotus, Ashton-Tate, Word Perfect, and Borland. What are the odds Autodesk will find not one, but several products that these companies have missed?
You can always find an investment that meets your criteria, but if your criteria are out of whack with reality, you might as well blow your money at the track where at least you get to smell the horses…
"The next AutoCAD" here could just as easily mean the next IBM 360, the next DB2, the next LaserJet printer, except those kind of opportunities don’t come along very often and as companies get bigger and bigger their successes are supposed to get bigger, too, which usually isn’t even possible, leading to the very corporate decline we are seeing in both companies. "The next AutoCAD" could mean Ginni Rometty’s favorites -- cloud, analytics, mobile, social, and security, except with IBM a lesser player in every one of those new markets, what are the chances of being successful with all of them? Zero. With one of them and a Manhattan Project (or IBM 360) effort? Pretty good, but not one of these segments by itself can be a $100 billion business.
Whether it’s Meg Whitman or Ginni Rometty, the problems these executives face are the same and are almost equally impossible. Neither woman can pull off a Steve Jobs turnaround because Steve’s task was easier: his company was already on its knees and vastly smaller than either HP or IBM. So stop comparing these behemoths to Apple circa 1997. A better comparison would be to Dell.
By taking his company private Michael Dell changed the game, eliminated completely Wall Street pressure and influence, and dramatically increased his chances of saving his company. Why haven’t Meg and Ginni thought of doing the same? Why aren’t they? There’s plenty of hedge fund money to enable the privatization of both companies. But the hedge funds would immediately fire the current CEOs, which is probably why this doesn’t happen.
Ginni Rometty and Meg Whitman appear to be more interested in keeping their jobs than in saving their companies.
On June 8th at the Apple World Wide Developer Conference (WWDC), CEO Tim Cook will reportedly introduce a new and improved Apple TV. For those who live under rocks this doesn’t mean a television made by Apple but rather a new version of the Apple TV set top box that 25 million people have bought to download and stream video from the Internet. But this new Apple TV -- the first Apple TV hardware update in three years -- will not, we’re told, support 3840-by-2160 UHD (popularly called 4K) video and will be limited to plain old 1920-by-1080 HD. Can this be true? Well, yes and no. The new Apple TV will be 4K capable, but not 4K enabled. This distinction is critical to understanding what’s really happening with Apple and television.
First we need to understand Apple’s big number problem. This is a problem faced by many segment-leading companies as they become enormous and rich. The bigger these companies get, the harder it is to find new business categories worth entering. Most companies, as they enter new market segments with new products, hope those products come to represent at least five percent of their company’s gross revenue over time. The iPhone, for example, now drives more than 60 percent of Apple’s revenue. Well, the Apple TV has been around for nearly a decade and has yet to approach that five percent threshold, which is why Apple has referred to it since its beginning as a hobby.
Let’s say Apple sells five million Apple TVs per year at $79 wholesale for gross hardware revenue of just under $400 million annually. While $400 million sounds like a lot, for a company with Apple’s fiscal 2014 sales of $182 billion, it’s at best a rounding error, if that -- just over two tenths of a percent of total sales. So in an MBA textbook sense the Apple TV wasn’t (and isn’t) worth doing. The business simply isn’t big enough to bother.
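Here is that rounding-error arithmetic spelled out; the five million units and $79 wholesale price are the same rough assumptions I used above.

```python
# The "rounding error" arithmetic: assumed Apple TV unit sales and wholesale
# price versus Apple's reported fiscal 2014 revenue.
units_per_year = 5_000_000
wholesale_price = 79             # dollars, assumed
apple_fy2014_revenue = 182e9     # dollars

apple_tv_revenue = units_per_year * wholesale_price
share = apple_tv_revenue / apple_fy2014_revenue

print(f"Apple TV hardware revenue: ${apple_tv_revenue / 1e6:.0f} million")   # ~$395 million
print(f"Share of total sales: {share:.2%}")                                  # ~0.22%
print(f"Five percent threshold: ${apple_fy2014_revenue * 0.05 / 1e9:.1f} billion")
```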
But this is Apple, a company that loves to redefine product categories. And by definition every new product category starts at zero. So if Apple wants to start anything truly new it will have to start small, which it did with the Apple II, Macintosh, iPod, iTunes, iPhone, iPad, and the Apple TV.
Everybody knows a new Apple TV is coming but the press reports to date have had very few details other than the fact that the box won’t support 4K. You know Apple had to deliberately leak that one detail for some strategic reason. So why introduce a new Apple TV at all if the performance being asked of it (decoding H.264 1080p video) hasn’t changed? There are only two reasons to do a new Apple TV under these circumstances: 1) to take cost out of the product, making it cheaper or increasing profit margins, or; 2) the technical requirements for the box actually have changed quite a bit but for some reason Apple won’t be immediately asking the new box to do much more than the old box already does. Apple is likely motivated by both reasons, because competitive products like Google’s Chromecast ($35) and Amazon’s Fire TV Stick ($39) have created something of a set top box (or stick) price war, and it’s in Apple’s strategic interest for the Apple TV to support 4K video ASAP no matter what the company says about 4K at the WWDC.
For Apple to do what it has always done, the company must change the game. It can’t go head-to-head on price so that means dramatically increasing quality of service for close to the old (higher) price. Add to this Apple’s need for new product categories and new hits -- especially really, really big hits that will make meaningful revenue for the world’s most valuable public company.
What I think will happen at the WWDC is Apple will announce a spectacular new Apple TV -- the most powerful streaming box the world has ever seen -- wow developers with its potential and beautiful user interface, but will for the moment limit the features to not much more than the old Apple TV could provide, though with the addition of true streaming. Apple has a difficult path to follow here, you see, because it needs to inspire developers to support and extend the new box while, at the same time, creating a video content ecosystem that gets shows from video producers, broadcast and cable networks, and movie studios that have come to inherently distrust Apple as a destroyer of record companies.
If Apple were to throw a completely un-throttled Gen-4 Apple TV on the market, it could cost Cupertino its chance to tie-up long-term content deals to feed those boxes, limiting their total success. Apple needs this new Apple TV to do much more than run Netflix and Hulu, because Apple needs billions and billions in new revenue.
Put simply, Apple wants to own television. We’re not talking about broadcast TV or cable TV or even Over-the-Top streaming TV. With the new Apple TV, Apple wants to own it all.
So at WWDC it’ll show the new box doing anything that Chromecast or Roku can do, with the addition of iTunes and two new streaming services -- music and live TV. Apple won’t at first have every TV network and local station on its service (neither does Hulu, remember, which lacks CBS), nor every cable network, but it’ll have a credible solution aimed at cord cutters, with superior performance and a price that’s higher than Netflix, Roku or Amazon Prime but the same or slightly lower than basic cable. It’ll create an ecosystem that works and works reliably, and over time it will sell millions more Apple TVs and sign on many more networks and studios.
Then, in 2016, will come a surprise software upgrade with the switch to H.265 and 4K. Apple has to beat the cable companies and broadcast networks to 4K if it has a hope of displacing those industries, which -- along with day-and-date streaming of 4K movies -- are Apple’s ultimate goals.
In the US alone these three video entertainment channels add up to about $90 billion in revenue annually. Add the rest of the world to that and we’re talking about $200+ billion. THAT’S Apple’s target and it’s Cupertino’s goal to get a majority of that action as well as selling ultimately 50 million Apple TVs per year.
Apple won’t be alone in this effort. By deciding not to sell its own 4K TV and by creating a service that will drive 4K TV sales, Apple has bought the friendship of every big screen TV maker.
Industries are most ripe for disruption during periods of technical transition, remember, so this switch from HD to 4K may be Apple’s only chance to snatch and grab.
Apple can do it, too, with luck and technology and the willingness to spend a LOT of money to make even more money. But it all depends on this upcoming WWDC and on introducing the Apple TV in a way that’s exciting yet not intimidating to potential partners.
Now let’s end with some pure speculation (as if what I’ve written above isn’t speculation enough). If I were running Apple, here’s how I would accomplish this delicate task. I’d skip 4K completely and go to 5K. Remember, Apple has been selling 5K iMacs for months now. Then I’d go (remember I’m channeling Tim Cook) to every TV network and movie studio and license from them the exclusive 5K rights for their content. This would be a bit like when Mark Cuban started HDNet before many people had HDTVs or cable channels were even offering HD signals. The networks would all sign on because they’d see it as money for nothing since there are no 5K TVs. Maybe Apple will introduce one after all, but that’s still a very small niche play, especially since Apple would probably have to build its own super-resolution conversion system just to take available content that high (35mm film, for example, is at best 4K).
"That Tim Cook, what a maroon!" the networks would say, counting their loot. And the cable company execs, who are already finding it hard enough just to do 4K, would roll their eyes.
And then, having tied-up the 5K rights, Apple would reveal that the 5K Apple TV can down-convert to 4K for "degraded" displays, the whole point having been tying-up the rights, not really streaming 5K.
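A quick sanity check on the down-conversion, assuming "5K" means the 5120-by-2880 of the Retina iMac and "4K" means 3840-by-2160 UHD: the scale factor works out to a clean 0.75 in both dimensions.

```python
# Resolution arithmetic behind a 5K-to-4K down-conversion.
five_k = (5120, 2880)   # assumed: Retina iMac "5K"
four_k = (3840, 2160)   # UHD "4K"

scale = (four_k[0] / five_k[0], four_k[1] / five_k[1])
pixel_share = (four_k[0] * four_k[1]) / (five_k[0] * five_k[1])

print(f"scale factor: {scale[0]:.2f} x {scale[1]:.2f}")              # 0.75 x 0.75, a clean resize
print(f"a 4K frame has {pixel_share:.0%} of a 5K frame's pixels")    # 56%
```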
If this comes to pass, remember you read it here first.
This is Sadie the Dog wearing her new Apple Watch. The watch actually belongs to my young and lovely wife, Mary Alyce, but she was unwilling to be photographed this morning while Sadie will pose anytime, anywhere. This is the Sport model of the Apple Watch in space gray with a black band. What makes this picture interesting is the watch was delivered last Friday two weeks early.
I ordered the watch on the first day Apple was taking orders but didn’t do so in the middle of the night so I missed the first batch of watches that were delivered in April. It was promised for delivery June first. Since then there have been stories about faulty sensors and other suggestions that watch deliveries might be later than expected -- stories that I’d say are belied by this early delivery.
I’m nobody special to Apple, so this early delivery wasn’t a perk just for me. If you have been handicapping Apple earnings based on watch supply glitches, I think you should stop.
Sadie now has to learn to tell time or Siri has to learn how to bark.
And for those who are wondering, Sadie is a five year-old Carolina Dog, also called an American Dingo.
Among the great business innovations of the Internet era are Kickstarter and the many similar crowdfunding sites like IndieGoGo. You know how these work: someone wants to introduce a new gizmo or make a film but can only do so if you and I pay in advance with our only rewards being a possible discount on the gizmo or DVD. Oh, and a t-shirt. Never before was there a way to get people -- sometimes thousands of people -- to pay for stuff not only before it was built but often before the inventors even knew how to build it. From the Pebble smart watch to Veronica Mars, crowdfunding success stories are legion and crowdfunding failures quickly forgotten. I’ve been thinking a lot about crowdfunding because my boys are talking about doing a campaign this summer and I have even considered doing one myself. But it’s hardly a no-brainer, because a failed campaign can ruin your day and damage your career.
From the outside looking in, a typical Kickstarter or IndieGoGo campaign is based on the creator (in this case someone like me, not God) having a good idea but no money. If the campaign is successful this creator not only gets money to do his or her project, they get validation that there’s actually a market -- that it’s a business worth doing. About 80 percent of crowdfunding campaigns come about this way.
The other type of crowdfunding campaign isn’t so overtly about money. My three sons, for example, have an idea for a summer business. It’s a great idea they came up with all on their own and you’ll hear more about it here after school ends on June 5th. But the amount of capital required to do their business isn’t actually that great. In fact it’s well within the investment capability of their old Dad after he’s had a few drinks. Yet still they are considering a crowdfunding campaign, in this case as a sales channel. People come to Kickstarter and IndieGoGo looking for projects to spend money on, which in the view of my sons identifies those folks as customers. So the boys plan to launch a campaign with modest funding goals they are sure to reach, but mainly they want to be noticed by their ideal customers, who happen to be crowdfunding junkies. Smart kids.
I’m sure this trend of seeing Kickstarter as an alternative to Amazon, eBay, or Etsy is common. Some campaigns scream it. When it looks like the video cost more to produce than the campaign is seeking, that’s a tell. Or when the campaign is from an outfit that’s already successful and ought to have at least that much money in the bank. Pebble made its mark in crowdfunding, but did it really need crowdfunding for its follow-on smart watches? No, but it was probably a cheaper channel -- one that was proven and effectively self-financing, too.
But a lot of crowdfunding campaigns fail. Sometimes the idea or the product is just, well, stupid. Often the people behind the campaign are asking for way too much money. Remember, if it’s a Kickstarter campaign and you miss your goal you get nothing, so it’s better to aim low and over-achieve. The key differentiator with IndieGoGo is that you keep whatever money is raised even if the target is missed, as the sketch below shows. Having studied a lot of failed campaigns, though, I think most fail because the people asking for the money don’t do a very good or thorough job of explaining themselves. Just having a good idea is not enough.
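For anyone weighing the two platforms, here is the payout-rule difference in a nutshell -- a minimal sketch that ignores platform and payment-processing fees, with hypothetical function names of my own.

```python
# All-or-nothing (Kickstarter-style) versus keep-what-you-raise
# (IndieGoGo flexible funding). Fees are ignored to keep the point clear.
def all_or_nothing_payout(goal, pledged):
    return pledged if pledged >= goal else 0

def flexible_payout(goal, pledged):
    return pledged

goal, pledged = 50_000, 40_000   # an 80-percent-funded campaign
print(all_or_nothing_payout(goal, pledged))   # 0      -- missed the goal, collect nothing
print(flexible_payout(goal, pledged))         # 40000  -- keep what was raised
```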
I’m the kind of guy who might be successful with a crowdfunding campaign, I’ve been told. My name is fairly well-known. I have a track record and an established audience of readers. And while some of those readers hate me, most don’t. Using this blog I could promote the crowdfunding campaign and reinforce it. Using my so-called communication skills I could make the video and campaign web page fun to read and compelling. "For you, Bob, it would be easy money".
Not so fast.
Here’s my dilemma, for which I need your advice. As you may remember, I’ve been working on a TV series called Startup America about tech startups and their founders -- what really works and what doesn’t. It’s not Shark Tank, it’s better, because the show is not just about the deal but also about the execution and the outcome. And in our case the companies (you helped pick them, remember) are all real. The series will be carried next season on PBS, the producing station is WNET in New York, and our main underwriter is Salesforce.com (thank you, Marc Benioff). The dilemma arises because PBS, being risk-averse, has ordered only a certain number of episodes (not a full 10-show season). If I give them more episodes for the same money they’ll air them, so that’s what I’d like to do. But I’ve already spent my budget as planned on the initial episodes, so I’ve been considering a crowdfunding campaign to pay for making a couple more, which ought to be fairly cheap to do since they are mostly shot already. Should I do it?
My gut says no. It’s probably better to try to find another underwriter to go along with Salesforce.
Here’s my thinking. Kickstarter campaigns are usually go or no-go but mine wouldn’t be and that’s confusing. The series will air no matter how much money is raised, we’ll just present fewer episodes. So I fear there will be a lack of urgency and nobody will care. Should I risk the goodwill I’ve built up over the years by asking for money?
You tell me.
This past week a very large corporation on the east coast was hacked in what seems to naive old me to be a new way -- through its corporate phone system. Then one night during the same week I got a call from my bank saying my account had been compromised and to press #4 to talk to its security department. My account was fine: it was a telephone-based phishing expedition. Our phone network has been compromised, folks, and nobody with a phone is safe.
Edward Snowden was right: we’re not secure, though this time I don’t think the National Security Agency is involved.
Here’s how this PBX hack came down. Step one begins with looking for companies that have outsourced their IT help desk to a third party company, preferably overseas. There are today many, many such companies and it is easy to find them and to find out who is running their offsite or offshore help desk.
Step two is robocalling at night into the corporate phone system, punching in every possible extension number. Live and dead extensions are mapped, and any voicemail greetings that are encountered are mined for the user’s name.
Step three happens during normal business hours, not at night. An employee of the target company is called at their desk by someone claiming to be from the outsourced help desk company. The incoming caller ID is spoofed to look right, the caller addresses the employee by name, it all feels legit. "I’m from the (outsourcing company name) IT help desk", the Bad Guy says, "and we’re having an issue with the network, possibly originating at your workstation, so I need you to: 1) install a software tool (malware, virus, etc.) or; 2) allow a remote access session so I can fix the problem".
It’s social engineering and it’s happening all over the place.
My call from the bank was different. I don’t remember if the caller used my name or not, but I am a current customer. A friend of mine got a similar call recently about an account he had already closed, which made the scam obvious; I wasn’t so lucky. I was really tempted to press #4, but precisely because I’d heard about my friend’s experience just the day before, I didn’t. Instead I logged in to my online banking account, where there were no alerts and nothing seemed amiss. My bank can text me if there’s a problem but it hadn’t, and no money seemed to be missing. Then I called the number on the back of my ATM card to talk to the bank security department and it was closed. The call center was supposed to be open until 10PM local time and it was only 8:15. Could it have been breached and a zillion numbers like mine stolen so quickly?
I called back the next day; the bank said there had been no problem with my account, but it couldn’t explain why the call center was down.
This was Bank of America, by the way.
We’ve lost control of our phone network. I’m not lobbying here for a return to the AT&T monopoly of pre-1983, but what we have now is not safe. Haven’t you noticed the uptick in sales calls to your number that you thought was on the National Do Not Call Registry? That registry, and the law that created it, are no longer enforceable. The bad guys won but nobody told us. They are operating from overseas and can’t be traced. If they steal our money it can’t be traced, either.
What do you think can be done about this problem? I have some ideas, what are yours?
Last week Amazon.com was the first of the large cloud service companies other than Rackspace to finally break out revenue and expenses for its cloud operation. The market was cheered by news that Amazon Web Services (AWS) last quarter made an operating profit of $265 million with an operating profit margin of 19.6 percent. AWS, which many thought was running at break-even or possibly at a loss, turns out to be for Amazon a $5 billion business generating a third of the company’s total profits. That’s good, right? Not if it establishes a benchmark for typical-to-good cloud service provider performance. In fact it suggests that some companies -- IBM especially -- are going to have a very difficult time finding success in the cloud.
First let’s look at the Amazon numbers and define a couple terms. The company announced total AWS sales, operating profit, and operating profit margins for the last four quarters. Sales are, well, sales, while operating profit is supposed to be sales minus all expenses except interest and taxes (called EBIT -- Earnings Before Interest and Taxes). Amazon does pay interest on debt, though it pays very little in taxes. Since tax rates, especially, vary a lot from country to country, EBIT is used to help normalize operating results for comparing one multinational business with another.
There’s another figure that wasn’t reported and that’s gross profit margin -- revenue from customers paying for the service minus the Cost of Goods Sold (COGS -- in this case the cost of directly providing the AWS service), expressed as a percentage of that revenue. Gross margins are always higher than operating margins because they deduct fewer expenses. As an example, IBM’s operating margin as a total corporation is also 19 percent-and-change, just like AWS’s, while IBM’s gross margin is just over 50 percent. This doesn’t mean that gross margins for AWS have to be similar to those of IBM, but it strongly suggests that cloud gross margins in the real world aren’t typically in the stratospheric range of pure software companies, where 80+ percent is common.
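For the spreadsheet-averse, here are the two margin definitions side by side, using rough stand-in numbers in the neighborhood of IBM’s corporate figures (illustrative, not reported).

```python
# Gross margin versus operating margin, with illustrative numbers.
def gross_margin(revenue, cogs):
    """Revenue minus direct cost of providing the product, as a share of revenue."""
    return (revenue - cogs) / revenue

def operating_margin(revenue, total_operating_expenses):
    """Operating profit (EBIT) -- revenue minus all expenses except interest and taxes."""
    return (revenue - total_operating_expenses) / revenue

revenue = 100.0
cogs = 50.0                   # direct cost of delivering the service
other_operating_costs = 30.5  # R&D, sales, marketing, G&A, etc.

print(f"gross margin:     {gross_margin(revenue, cogs):.0%}")                               # ~50%
print(f"operating margin: {operating_margin(revenue, cogs + other_operating_costs):.1%}")   # ~19.5%
```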
These numbers contrast sharply with the claims of experts who have suggested in the recent past that cloud computing is a high-margin business with gross margins of 70-90 percent. Here’s a quote on this subject from a story last year in Re/Code:
So how profitable could the cloud be? IBM doesn’t yet disclose the gross margins of its cloud operations, but it’s worth looking around at other companies for comparisons. IBM’s closest competitor is Amazon, which doesn’t break out the financials of its Amazon Web Services unit. Educated guesses have pegged the size of AWS at bringing in $5 billion in revenue at a gross margin of 90 percent or higher. A gross margin of that size would equal that of IBM’s software unit, which as of yesterday was 89 percent.
No way are AWS gross margins that high, and last week’s numbers prove it. For those who love to dive into these numbers it should be pointed out that the reported AWS figures are actually significantly lower if adjusted for accounting sleight-of-hand. AWS specifically excludes from its results stock-based employee compensation ($407 million for Amazon as a whole, with $233 million of that attributed to Technology and Content, of which AWS is a part) and a mysterious $44 million in other operating expenses that weren’t attributed by Amazon to any particular division. For a true Generally Accepted Accounting Principles (GAAP) earnings analysis these extra costs should be factored in. If we charge a third of the Technology and Content stock-based compensation to AWS ($77 million) and a sales-adjusted eight percent of the mystery $44 million (~$3.5 million), that brings AWS operating profit down to roughly $185 million and operating margin to about 13.6 percent, which coincidentally is not far off from a similar number at Rackspace.
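Here is that adjustment worked through, using the apportionment I just described (the one-third and eight percent splits are my own rough allocations, remember, not Amazon’s).

```python
# Adjusting the reported AWS quarter for stock-based compensation and the
# unallocated operating expenses. All figures in millions of dollars.
reported_op_profit = 265
reported_op_margin = 0.196
revenue = reported_op_profit / reported_op_margin     # ~1,352

stock_comp_share = 233 / 3      # one third of Technology & Content stock comp, ~78
mystery_share = 44 * 0.08       # sales-adjusted slice of the unallocated $44M, ~3.5

adjusted_profit = reported_op_profit - stock_comp_share - mystery_share
adjusted_margin = adjusted_profit / revenue

print(f"implied AWS revenue:       ~${revenue:.0f}M")
print(f"adjusted operating profit: ~${adjusted_profit:.0f}M")    # ~$184M
print(f"adjusted operating margin: {adjusted_margin:.1%}")       # ~13.6%
```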
This does not at all mean that cloud computing is a bad business to be in. If you are a Microsoft, Google, or Amazon it’s a business you absolutely have to be in. If you are Rackspace it’s all you do. But that doesn’t mean cloud is likely to help your highly-profitable company look even better to Wall Street. Just the opposite, in fact.
Look at IBM, for example (Oracle and HP are in very similar situations). Cloud is a key component of IBM’s CAMSS (Cloud, Analytics, Mobile, Social, Security) strategy for transforming the corporation, but if cloud profit margins are actually lower than those of IBM overall, the cloud will tend to drag earnings down, not boost them up.
AWS proves that cloud by itself is like the PC business -- high volume, low margin. And it’s a high-investment business, into which Amazon poured almost $5 billion during a time when IBM was crowing about its own $1 billion cloud budget.
This means IBM can’t count on the cloud to directly increase profitability -- which is exactly what the company promised on its earnings call two weeks ago. Uh-oh.
But for companies like IBM there’s more to cloud computing than just servers and bandwidth, and this is where some see IBM’s salvation. IBM, Oracle, HP, etc. do cloud computing because there is a market for it and they can sell other more profitable services with it. They try to make their money on the other services.
Some conveniently forget that IBM struggled and stumbled badly for its first several years in the cloud business, which it ran under a variety of names. Big Blue struggled with cloud because -- as with the PC -- it simply doesn’t know how to operate a low-margin business. Buying SoftLayer got IBM back into the cloud business. SoftLayer, not IBM, knew how to do cloud computing.
One way to make more money in a low margin business is to sell more profitable options and services. In the case of the PC it was software. For the cloud it is applications and services. In this context I am defining an application as a collection of software products put together and configured to provide a business function. Anyone can buy a computer and an accounting package. An application in this context means being able to buy the accounting as a ready-to-go service -- Software as a Service, or SaaS.
The other way to make money is to sell support services with the cloud. Of all the cloud providers IBM is best positioned to sell support services with its cloud. In fact if you look at IBM’s recent cloud signings, services is a big part of them. While IBM is best positioned to sell services with its cloud, it is simultaneously gutting its services division. This is an excellent example of how IBM’s short term and long term goals are in horrible conflict. By gutting services, IBM is upsetting customers and damaging its ability to sell its products and services.
In the area of SaaS, IBM’s business software portfolio is very weak. In the 1990s IBM had a huge software portfolio, then squandered it in the early 2000s when then-CEO Sam Palmisano chose to maximize shareholder value and killed most software projects. Sam apparently figured it would be more profitable to ship software maintenance offshore and to acquire new software products instead of developing them. The problem was Sam didn’t understand the future importance of SaaS. IBM’s big On Demand business was proof of that: it had no software. Today, for example, IBM doesn’t have accounting software it can build into SaaS.
For IBM to become a strong player in the cloud SaaS market I think it needs to make a deal with the devil -- Oracle. IBM should negotiate a licensing and marketing agreement with Oracle to host Oracle business products on IBM’s cloud. Both companies would then market the new software as a service.
Neither company, of course, will do this.
IBM is beginning to realize the importance of services to its cloud business. Right now IBM doesn’t have much in the way of cloud services to sell beyond SoftLayer and the horribly dated WebSphere. But IBM recently announced its Hybrid Cloud. There is more to this than meets the eye. If IBM is successful it will be able to provide support services to customers using anyone’s cloud. AWS can make its 16.9 percent from the platform (cloud infrastructure -- the lower margin bits) and IBM may be able to make 30 percent from support services.
But a big cloud support win will require good tools, and IBM lacks them. It has to get past the mindset of billable hours (at IBM, the longer something takes to accomplish the better) to actually fulfilling customer requirements. This will take better cloud tools, like those of Adobe Systems. It takes only a few mouse clicks with Adobe on AWS to set up a new web service -- a simple task that still takes weeks for IBM with WebSphere. The Adobe tools are much, much, much better than IBM’s.
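I can’t show you Adobe’s tooling, but to give a sense of how little ceremony the modern cloud side demands, here is a minimal sketch using AWS’s own Python SDK (boto3) that stands up a public static web endpoint in a handful of API calls. The bucket name is a placeholder, and credentials, region (us-east-1), and public-access settings are assumed to be configured already.

```python
# A minimal sketch: a static web endpoint on AWS in a handful of API calls.
# Assumes us-east-1, configured credentials, and a bucket policy that allows
# public reads (not shown). The bucket name is a placeholder and must be unique.
import boto3

s3 = boto3.client("s3")
bucket = "example-cringely-demo-site"   # placeholder name

s3.create_bucket(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="index.html",
              Body=b"<h1>Hello from the cloud</h1>", ContentType="text/html")
s3.put_bucket_website(Bucket=bucket,
                      WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}})

print(f"http://{bucket}.s3-website-us-east-1.amazonaws.com")
```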
IBM should clearly partner with Adobe (as AWS already has). But is anyone at IBM talking to Adobe? Does anyone know IBM should be talking to Adobe? I doubt it. Why does IBM need Adobe, they’ll ask, when it already has WebSphere?
I am not making this up.
Yesterday was Tax Day in the United States, when we file our federal income tax returns. This has been an odd tax season in America for reasons that aren’t at all clear, but I am developing a theory that cybersecurity failures may shortly bring certain parts of the U.S. economy to their knees.
I have been writing about data security and hacking and malware and identity theft since the late 1990s. It is a raft of problems that taken together amount to tens of billions of dollars each year in lost funds, defensive IT spending, and law enforcement expenditures. Now with a 2014 U.S. Gross Domestic Product of $17.42 trillion, a few tens of billions are an annoyance at most. Say the total hit is $50 billion per year, well that’s just under three tenths of one percent. If the hit is $100 billion that’s still under one percent. These kinds of numbers are why we tolerate such crimes.
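Here is that tolerance arithmetic spelled out:

```python
# Annual cyber-losses as a share of 2014 U.S. GDP.
gdp_2014 = 17.42e12   # dollars

for annual_hit in (50e9, 100e9):
    print(f"${annual_hit / 1e9:.0f} billion is {annual_hit / gdp_2014:.2%} of GDP")
# $50 billion is 0.29% of GDP
# $100 billion is 0.57% of GDP
```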
One summer when I was in college I worked in the display department of a Sears store, helping a Latvian carpenter named Joe Deliba. When we needed more nails Joe sent me to take them from the hardware department. We stole as many of our materials as we could from the store, which chalked the losses up to shoplifting even though they were really going into a new display in Ladies Dresses. The store expected a certain level of losses, I was told, and as long as they stayed under about five percent it didn’t matter. I suspect that five percent number shows up in a lot of financial statements at places like banks and credit card companies where it is considered just a cost of doing business.
When PayPal was getting started back in the Peter Thiel and Max Levchin era the company had to absorb a significant amount of theft losses as they figured out their payment business, which ultimately came to be a huge security software suite with some money attached. At one point I was told PayPal had absorbed $100 million in losses, which for a company bankrolled by Sand Hill Road is a lot of moolah. But they figured it out and made it through.
The question I have today is whether we as a nation are at risk of not figuring it out or not making it through.
The past 12 months have been brutal in terms of personal and corporate financial information losses in America. There have been so many hacking cases -- from Anthem to Target and a hundred others in between -- that their names no longer matter. What matters to me is in the past year I have had to replace half a dozen credit or debit cards and received four offers of identity theft protection services paid for by affected companies or government agencies. Government agencies!!!
Now factor into this what we’ve learned so far from Edward Snowden -- that our own government also takes our information and their methods of controlling access to it are pretty pathetic.
We know from all these hacking cases a lot more than we used to about when and how our data can be taken. We still don’t know much about the extent of actual financial damage because to date it’s been beneath that five percent limit set down at the Sears store. Banks are presumably losing billions every year but that’s okay, I guess, because they are making even more billions. It makes me wonder, though, how easy it might be to say something was a theft when it’s really some banker’s second home in the Hamptons. It’s just a thought.
There is definitely something going on that’s different this year. It’s not just the increased number of ID thefts reported (two million I’m told -- just since February 15th!).
The Sony hack showed the sophistication of these attacks is well beyond the technical skills of most companies and government agencies. Cyber criminals can purchase the code and assistance they need over the Internet. The currency of choice is Bitcoin, because it is anonymous. These are significant -- and disturbing -- changes. But there’s more.
I get an inkling of it in my own dealings with the Internal Revenue Service. It’s not just that Congress has so cut the IRS budget that they can’t effectively enforce the tax laws anymore: I think the game has changed or started to change and the feds are scared shitless as a result. Here’s the least of it from a credible source: "It now seems very possible they stole data directly from the IRS and/or Social Security Administration. This attack appears to be huge. We could all be getting new tax ID numbers this year and next year we may all be filing our taxes by mail again".
But wait, there’s even more! The traditional cyber theft mechanisms are hacking the system to steal minute amounts from many transactions; using identity theft to get false credit cards or file bogus tax returns with refunds, or; gaining account numbers and passwords and simply draining bank accounts. The techniques for all these are well known and the loss thresholds have evidently been acceptable to the government and the financial system -- again below five percent.
It’s simply too difficult to do enough of these thefts to exceed five percent before being detected and shut down. And so the system has long had an awkward equilibrium.
Willie Sutton, the famous bank robber, said he robbed banks because "that’s where the money is". For the most part cyber theft to this point hasn’t been where the money is. It has involved relatively complex frauds involving not very big amounts of money.
What if that has somehow changed?
One fear I heard expressed many times last year was that this year we’d see a tsunami of fraudulent tax returns in January, but the IRS claims that hasn’t happened. But something else has happened, I assure you, because people I talk to in this area on a pretty regular basis are suddenly even more paranoid than usual.
At this point certain readers will come to the conclusion that I don’t know what’s happening, that possibly nothing is happening, that I’ve jumped the shark and it’s time to stop reading old Cringely. Maybe so. But all that I can say in defense is that Snowden showed we have an extensive and fairly incompetent cyber security bureaucracy dedicated as much to keeping us in the dark as keeping us safe as a people. If something were going terribly wrong -- if something is going terribly wrong -- would they tell us?
Forget about bad tax returns and fake credit cards. What if what’s been compromised are the real keys to the kingdom -- literally the accounting records of banks, sovereign funds, and even governments? A criminal could steal money, I suppose, or they could simply threaten to destroy the accounting data as it stands, casting into doubt all claims of wealth. What makes Bill Gates richer than you or me, after all, but some database entries?
I have reason to believe that the game has been compromised and significant change has to follow. Whatever tools we use today to determine who owns what and who owes what are probably in danger, which means new tools are coming. And with those new tools the financial system, the financial regulatory system, and the data security system will probably change overnight.
I tell you it’s happening. I’m sure there are readers here who know about this. Please speak up.
My friend Andy Regitsky, whom I have known for more than 30 years, follows the FCC, blogs about them, and teaches courses on -- among other things -- how to read and understand their confusing orders. Andy knows more about the FCC than most of the people who work there and Andy says the new Net Neutrality order will probably not stand. I wonder if it was even meant to?
You can read Andy’s post here. He doesn’t specifically disagree with my analysis from a few days ago, but goes further to show some very specific legal and procedural problems with the order that could lead to it being killed in court or made moot by new legislation. It’s compelling: Andy is probably right.
I’m not into conspiracy theories, but this Net Neutrality situation suggests a strong one. Let me run it by you:
1 -- The new FCC Chairman, Tom Wheeler, comes from the cable TV and wireless industries where he worked as a top lobbyist. He’s a cable guy.
2 -- Wheeler proposes the exact sort of Net Neutrality rules we might expect from a cable guy, keeping the Internet in Title I of the Communications Act as an Information Service and allowing ISPs to sell fast lanes to big bandwidth hogs like Netflix.
3 -- The big ISPs, having got a lot of what they wanted, still smell blood, so they take the FCC to court where much of the order is struck down -- enough for the FCC to either back down or rewrite. Wheeler decides to rewrite.
4 -- Somewhere in there comes a phone call to Wheeler from President Obama and suddenly the former cable guy becomes a populist firebrand, calling for Internet regulation under Title II, just as Verizon threatened/suggested in court.
5 -- The new order is exactly the opposite of what the big ISPs wanted and thought they might get. It’s Armageddon to them. What are they going to do? Why sue of course!
6 -- The new order is seriously flawed as Andy points out. It’s a mess. But at this point it’s also the law and if life is going to get back to something like normal all sides are going to have to come together and agree on how to move forward. Verizon or some other big ISP can sue and get changes, but will they get the right changes? They didn’t the last time.
7 -- The better solution is for Congress to change the current law or write a new one. But this is a Congress that’s against the President, though maybe not solidly enough to override a veto.
8 -- So the big ISPs have their lobbyists lean on Congress to write such an Internet law but make it one that won’t be vetoed. The Internet goes back under Title I as an Information Service but Net Neutrality is codified and maybe even strengthened. President Obama gets the law he wanted all along but couldn’t rely on his party to produce.
Can this have been the point all along?
The Indiana Legislature is in the news for passing a state law considered by many to be anti-gay. It reminded me of the famous Pi Bill -- Bill #246 of the 1897 Indiana General Assembly. There’s a good account of the bill on Wikipedia, but the short story is a doctor and amateur mathematician wanted the state to codify his particular method of squaring the circle, a side effect of which would be officially declaring the value of π to be 3.2.
The bill was introduced by Representative Taylor I. Record, sent to the Education Committee where it passed, then went back to the Indiana House of Representatives where it again passed, unopposed. Then the bill went to the Indiana Senate where Professor C.A. Waldo of Purdue University, a member of the Indiana Academy of Science, happened to be visiting that day to do a little lobbying. Professor Waldo explained to the Senators the legislative dilemma they faced.
Then, according to an Indianapolis News article of February 13, 1897:
…the bill was brought up and made fun of. The Senators made bad puns about it, ridiculed it and laughed over it. The fun lasted half an hour. Senator Hubbell said that it was not meet for the Senate, which was costing the State $250 a day, to waste its time in such frivolity. He said that in reading the leading newspapers of Chicago and the East, he found that the Indiana State Legislature had laid itself open to ridicule by the action already taken on the bill. He thought consideration of such a proposition was not dignified or worthy of the Senate. He moved the indefinite postponement of the bill, and the motion carried.
Given that the bill was postponed, not defeated, I suppose it could be revived and passed today. Almost anything is possible in Indiana.
Speaking of the law, a number of readers have asked me to comment on the Ellen Pao v. Kleiner Perkins gender discrimination case that was resolved last week in San Francisco. I don’t have much to say except that I think the verdict was the right one. This is not to say that the VC industry doesn’t have problems in this area but that Kleiner Perkins is probably the best of the bunch and therefore was the worst possible target.
That’s my logical reaction. My emotional reaction is quite different because I spent all last week as part of a large jury pool in Sonoma County Superior Court for a case of statutory rape. The rape victim was a middle schooler who had two babies (two pregnancies) with a man probably 30 years her senior. The girl’s mother was charged as an accomplice. What this has to do with the Pao case is I could catch on my phone breathless live blogs of the Pao courtroom play-by-play from both the San Jose Mercury and Re/Code, yet the Santa Rosa rape case has yet to be noticed by any news media anywhere, I guess until now. In this era of supposed news saturation and citizen journalism that’s not the way it’s supposed to be.
I was on the jury for about five minutes before I was dismissed by the prosecution.
Finally, a revised and somewhat updated version of my IBM book is out now in Japanese! Here (in English) is the Preface for the Japanese Edition:
Why should Japanese readers care about the inner workings of IBM? They should care because IBM is a huge information technology supplier to Japanese industry and government. They should care because IBM Japan is a large employer with thousands of Japanese workers, most of them highly-paid. They should care because, of the American multinationals, IBM has always been the most like a Japanese company with strong corporate discipline and lifetime employment. Only IBM chose Japanese managers to run its subsidiary, making it not just IBM’s office in Japan but also Japan’s office at IBM.
But times have changed. IBM no longer offers its workers lifetime employment. And many other aspects of the company have changed, too. Some of these changes have been in response to economic forces probably beyond IBM’s control, but others can be traced directly to IBM management abandoning the principles under which the company was run for many decades.
IBM is a very different company today from what it was 10 or 20 years ago. That’s what this book is all about — how IBM has changed and why. It is not a happy story but it is an important one, because IBM is a bellwether for all its peers — peers that include big Japanese companies, too.
IBM has lost its way. This book explains how that happened and why. And while IBM is nominally an American company, the impact of this change is felt everywhere IBM does business, including Japan.
I’ve been hesitant to comment on the FCC’s proposed Net Neutrality rules until I could read them. You’ll recall the actual rules weren’t released at the time of the vote a couple weeks ago, just characterized this way and that for the press pending the eventual release of the actual order. Well the FCC finally published the rules last week and I’ve since made my way through all 400+ pages (no executive-summary commenting for me). And while there are no big surprises -- much less smoking guns -- in the FCC report, I think that, taken along with this week’s Wall Street Journal story about an Apple over-the-top (OTT) video service, the trend is clear: the days of traditional cable TV are numbered.
What booms through the FCC document is how much it’s written in response to the Commission’s loss last year in Verizon Communications Inc. v. FCC. Most of the more than 1,000 footnotes in the order refer to the legal defeat and place the FCC’s current position in that legal context. FCC lawyers have this time really done their homework, suggesting that it will be difficult for cable interests to win like they did last year.
Just to review, last year’s version of net neutrality from Commission Chairman Tom Wheeler wasn’t especially neutral at all, proposing a two-tiered system that would have allowed ISPs to sell fast-lane service to OTT video streamers like Netflix. But that still wasn’t good enough for Verizon, which sued for even more. A key component of Verizon’s argument in 2014 was that the Commission had no legal basis for regulating fast lanes at all with the Internet defined as an information service. If the FCC wanted to regulate fast lanes, Verizon argued, it’d have to claim the Internet was a telecommunications service regulated under Title II of the Communications Act of 1934. One can only guess Verizon lawyers felt the FCC would be reluctant to open up the legal can of worms of Title II regulation, which might have had the FCC approving your ISP bill and maybe throwing-in a fee or two.
But the Verizon lawyers, like pretty much everyone else, didn’t include in their reckoning the influence of HBO’s John Oliver. While the FCC’s proposed two-tier rules were open for public comment in the wake of Verizon Communications Inc. v. FCC, Oliver’s Last Week Tonight program did a 13-minute segment explaining Net Neutrality and the then-proposed two-tier system (referred to as "Cable Company Fuckery" -- is fuckery even a word? My word processing program says it isn’t). Oliver ended with a stirring call for viewers to send comments to the FCC and, lo, four million HBO subscribers and video pirates did just that.
So the FCC, having lost in both the law courts and the court of public opinion, embraced a version of Title II Internet regulation in the current order. It did just as Verizon had suggested, but of course Verizon wasn’t really suggesting anything of the sort and was just using Title II as a weapon.
The new rule specifically prohibits paid fast lanes and may well undermine a number of ISP deals struck last year with Netflix, in which the streamer agreed to pay special fees for uninhibited access to users. Those Netflix deals may still continue, though, if the payments are viewed primarily as being for co-location of video servers rather than for peering. The distinction matters: co-location charges are legal under the new order while peering charges are not.
In my view what we have at work here are conflicting business models and visions of the future of TV. It has been clear for a long time that cable TV service as we’ve known it since the Cable Act of 1992 is changing. Back in 1992 Internet service hardly mattered, while today cable ISPs make more money from Internet than they do from TV. Last year’s proposed two-tier service seemed to support the idea that cable ISPs could eventually replace subscriber channel package fees with OTT peering charges. Let a thousand OTT networks bloom as cable companies eventually become schleppers of bits, maybe dropping their own video services entirely, opting for less revenue but more profit thanks to OTT peering fees. But this wasn’t enough for Verizon, which pushed even harder -- too hard in fact.
In the long run I think the outcome will be much the same. Eventually the cable ISPs will become mainly ISPs and some version of OTT co-location will become a major profit center. This is the business loophole that will eventually -- and properly -- emerge. Ironically it argues against the current trend of massive video data centers since the video servers will be geographically dispersed to cable head-ends. But it makes great sense from a network management standpoint since it means only one copy of every TV show need be sent over the Internet backbone to each cable system, not one copy per viewer.
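Just to put rough numbers on that backbone savings -- and these are my own illustrative figures, not anything from the FCC order or from a cable operator -- here’s the arithmetic:

```python
# Back-of-the-envelope comparison of backbone traffic with and without
# co-located video servers at the cable head-end. Every number below is
# an illustrative assumption, not a measurement.

STREAM_MBPS = 5                 # assumed bitrate of one HD stream
VIEWERS_PER_HEADEND = 20_000    # assumed concurrent viewers behind one head-end
SHOWS_IN_ROTATION = 500         # assumed distinct shows being watched at any moment

# Without co-location: every viewer pulls their own copy across the backbone.
unicast_gbps = STREAM_MBPS * VIEWERS_PER_HEADEND / 1000

# With co-location: one copy of each distinct show crosses the backbone to the
# head-end, and the local servers fan it out to viewers.
colocated_gbps = STREAM_MBPS * SHOWS_IN_ROTATION / 1000

print(f"Per-viewer streaming:  {unicast_gbps:.0f} Gbps over the backbone")
print(f"Head-end co-location:  {colocated_gbps:.1f} Gbps over the backbone")
print(f"Reduction:             {unicast_gbps / colocated_gbps:.0f}x")
```

Change the assumptions however you like; the ratio stays lopsided as long as viewers greatly outnumber the shows they’re watching.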
Now to Apple’s video ambitions. Dish and Sony are already rolling-out OTT video services with no doubt many more to follow. Nobody is yet offering all the right pieces but it will eventually happen. Apple’s entry into the OTT streaming business, first with HBO Now and shortly with many more networks and channels, is significant not only because it validates the whole OTT market segment, but because Apple has so much darned money -- more than $180 billion in cash.
Having so much cash means that Apple can afford to wreak a lot of havoc in television with very little risk to its core business. It can take OTT further because making money with such a relatively small business isn’t that important. It can easily do radical things like (this is just an idea, not a prediction) buying-up the services of every member of the Writers Guild of America, inserting Apple deeply and inextricably into the Hollywood creative process. Apple can do pretty much whatever it pleases and there’s not much any other company can really do about it, which is why we’ll over the next couple years see dramatic changes throughout cable TV.
And while the current FCC Net Neutrality actions probably help Apple (and Netflix, and most likely you and me, too), these changes have been coming for a long time.
If you have an entrepreneurial bent it’s hard not to see an opportunity to start the next big cloud storage company in last week’s Nearline Storage announcement by Google. I saw it immediately. So did Google make a big pricing mistake? Probably not.
Nearline storage usually means files stored on tapes in automated libraries. You ask for the file and a robot arm loads the tape, giving you access to your data in a couple minutes. Google’s version of nearline storage is way faster, promising file access in three seconds or less. Google doesn’t say how it works, but it makes sense to imagine the data is stored on disks that are powered-down to save energy. When you ask for the file they spin-up the disk and give it to you.
Google Nearline costs 40 percent (not the four percent my ancient eyes originally saw) of the online storage price.
The chance here for storage price arbitrage screams out (or did, I guess, so kick me). Start your own cloud storage business for large files (video is perfect) based on Google Nearline. Keep a giant File Allocation Table (FAT) of nearline files in RAM on your web storage server. This is exactly how Novell revolutionized the LAN file server business in the 1980s with its Indexed Turbo FAT speeding file access by 1000X. Hold the most recently viewed files on a RAID array that also includes the first few megabytes (three seconds worth) of all other customer files. That way when a file request comes in the location’s already in RAM, the first few seconds can be delivered from RAID, and then the nearline data kicks in smoothly.
The result would be a cloud storage system for large files that’s as fast as any other -- as fast as Google -- for maybe 10 percent of the cost per bit. I’d bet that’s exactly how Google manages YouTube files.
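For the tinkerers, here’s a minimal sketch of the tiering I’m describing. The class names, the four-megabyte prefix, and the fake nearline backend are all stand-ins I made up so the example runs; none of this is Google’s actual Nearline API.

```python
# Illustrative sketch of the storage-arbitrage idea: serve the first few
# seconds of every file from a fast local tier while the cheap nearline
# copy spins up behind it. Invented names, not a real service.

import time

PREFIX_BYTES = 4 * 1024 * 1024      # assume roughly three seconds of video

class FakeNearline:
    """Stand-in for a nearline backend, including its spin-up latency."""
    def __init__(self):
        self.blobs = {}
    def write(self, name, data):
        self.blobs[name] = data
    def read(self, name):
        time.sleep(3)               # simulate waking a powered-down disk
        return self.blobs[name]

class ArbitrageStore:
    def __init__(self, nearline):
        self.nearline = nearline    # slow, cheap tier
        self.catalog = {}           # the in-RAM "FAT": name -> size
        self.fast_prefix = {}       # RAID tier: first PREFIX_BYTES of every file
        self.fast_full = {}         # RAID tier: recently viewed files, in full

    def put(self, name, data):
        self.catalog[name] = len(data)
        self.fast_prefix[name] = data[:PREFIX_BYTES]
        self.nearline.write(name, data)

    def get(self, name):
        """Yield the file with no visible nearline delay."""
        if name in self.fast_full:                         # hot file: all local
            yield self.fast_full[name]
            return
        yield self.fast_prefix[name]                        # instant first seconds
        rest = self.nearline.read(name)[PREFIX_BYTES:]      # ~3 s, hidden behind playback
        self.fast_full[name] = self.fast_prefix[name] + rest  # promote to hot tier
        yield rest

store = ArbitrageStore(FakeNearline())
store.put("movie.mp4", b"x" * (10 * 1024 * 1024))
first_chunk = next(store.get("movie.mp4"))   # returned instantly from the RAID tier
```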
With such an obvious opportunity to build a business at their expense, why didn’t Google price Nearline higher? Maybe it has patents to cover some of this and thinks it can keep competitors like me at bay. But more likely it just wants Nearline to succeed and making it darned cheap should do that. In fact this could in the long run be the death of tape storage.
However a very smart friend of mine sees the market differently. He thinks cloud storage price competition is going away and the price per bit at every cloud storage vendor will shortly be the same -- zero. They’ll give away the storage space and make money instead on the extras -- security, encryption, redundancy, geographic distribution, and integration with many operating and file systems.
Having given it some thought I agree. I’d go even further to suggest that some consumer cloud storage companies don’t care about the money at all. Their game is to be acquired so what really matters is the size of their user base and the best price for getting lots of users is free.
These business lessons were learned 20-30 years ago in the storage locker business. I have some of my meager retirement money invested in limited partnerships at Public Storage Inc., the national storage chain. That business is financed with hundreds of such limited partnerships. The PSI value proposition for limited partners is interesting. They build in seedier (cheaper) parts of town. They are able to charge rents equivalent to apartments on a per-square-foot basis without requiring the amenities or management labor of apartments. Even better, they have your stuff, so losses from deadbeats are near zero. If you don’t pay the rent they just auction your stuff. All very interesting, but not the major point. PSI makes its real money by building or buying storage units in the path of growth. They think 20 years ahead to what that seedy neighborhood will eventually become. Sometimes they build structures that can be converted to specific other uses when they are sold, increasing their value in a growing market. The actual storage business only has to run at break-even with the big bucks coming when the property is redeveloped or sold.
Now consider how this model might apply to cloud storage. They have your stuff, making most customers long-term captives. The value here lies in the customer relationship. The cost can go to zero because the real money comes from ultimately selling the customer relationship to a larger company. This makes every cloud storage company either an acquisition candidate (most of them) or a company that intends to become enormous then realize that customer value through selling value-added services down the line. It’s grow or die.
So the actual cost of providing cloud storage isn’t going to zero. Disk drives and bandwidth will continue to cost money. That’s just the cost of being (temporarily) in business.
Update: Reader Justin Fischer points out my math is off. Interestingly I wasn’t the only reporter to make this mistake. Sorry.
I’m an older guy with younger kids so to some extent I live vicariously through my friends, many of whom have children who are now entering the work force, and some of those children can’t find jobs. We’re not in a recession, the economy is expanding, new positions are supposedly being added every day, but the sons and daughters of my friends aren’t generally getting those jobs so they are staying in school or going back to school, joining the Peace Corps, whatever. Everyone is rattled by this. Kids don’t want to move home and parents don’t want to have them move home. Student debt continues to increase. Everyone wants to get on with the lives they thought they were promised -- the lives they’d signed up for and earned.
What changed?
Everything changed. Well everything except our personal perspectives, I suppose, which is why we’re so surprised to be where we are today. And to a large extent something else that didn’t change was the perspective of those people we count on, or think we can count on, to keep us out of trouble as a society.
This came to mind over the weekend when I watched an interview with President Obama by Kara Swisher of Re/Code, formerly of the Wall Street Journal. Obama had come to Silicon Valley for this and that, including meeting with some Stanford students and, of course, with Kara Swisher. He’s a very smooth guy. I’d like to interview him someday, but I’m afraid he hasn’t a clue, really, about how technology and society actually work together.
The part of the interview that stood out for me was when President Obama talked about how we all need to learn to code. The video’s embedded above, though you have to get about 19 minutes in to reach the tech part. President Obama’s two daughters are into coding, maybe a bit late but still into it, he said, and so should every other American below a certain age, and maybe above that age, too.
It’s computer literacy all over again.
About 20 years ago, for those who remember, we were all very concerned about what was called computer literacy, which was supposed to mean learning how to code. This was shortly after Al Gore had torn from the grasp of Bob Taylor and Bob Kahn the label of having invented the Internet. Personal computers were being sold by the ton thanks to $400 rebates from Microsoft, and just as Gordon Moore and Bob Noyce could only imagine a home computer being used for recipe storage (and so Intel passed on inventing the PC), the policy wonks of the early 1990s thought we all ought to learn to write software because that’s what computers are for, right?
It didn’t happen of course and thank God for that. Imagine a nation of 350 million hackers.
The Wisdom of Crowds rightly interpreted computer literacy’s concept of learning to code as learning to use a computer. It was as simple as that. Along came the Internet and the World Wide Web and we were off on a decade-long burst of improved productivity thanks to the new box on all our desks. Apps got more useful and browsers appeared and, yes, we could build our own web sites, but that wasn’t coding, it was just using a menu-based app and creating, for a while, 20-something millionaires along with our MySpace pages.
If this time is different, it’s because we’re being scared, rather than enticed, into coding. Malia and Natasha are supposed to be doing this to fight a STEM worker shortage that doesn’t actually exist and to justify immigration reforms, some of which (unlimited H-1Bs) will only hurt our economy. Don’t forget those kids in Finland who learn math better than we do, somehow without continual standardized testing, and those German kids, too, who beat our asses while studying for only half a day right up to high school. Let’s all learn to code!
It’s time to be afraid, we’re told. Be very afraid. Every policy these days seems to be based in some way on the politics of fear.
I saw an ad the other day for a local college. Their pitch was "over 95 percent of our students are able to find work within a year of graduation". A YEAR!!! I’m sure many of those graduates didn’t get the jobs they really wanted, either.
Here’s the problem. Smart kids can go to good schools, get a good education, go to college, get degrees and still be unable to find work in their community or their chosen field. I have a friend with two college-educated kids who can’t find work; the family has started a retail business to help the kids make a living.
An alarming number of engineering graduates can’t find work. These students have taken all the STEM classes society says are important. They have strong math and science skills… and they can’t find work.
We’ve been shipping millions of jobs offshore for years -- lots of manufacturing jobs and the engineering and technical jobs needed to support them. We can improve education and double the number of college graduates who have math, science, and engineering degrees -- and most of those will still be unable to find work. While education can be better, sure, the real problem is demand has dried-up.
US workers cost too much, we’re told. Benefit and entitlement costs per employee have gone through the roof. The biggest offender is of course health care. And what big industry is hiring the most people? Healthcare! What happens when the market and economy can no longer bear the cost of healthcare? That industry will be hit with serious cost cuts, too.
Today if you want to steer your kids toward a career with good employment prospects, it would be healthcare. However when they are middle-aged they’ll probably have to find new work, too. As more hospitals and medical practices are combined and consolidated there will be an increasing focus on revenue generation. Obviously this is in direct conflict with the market’s need to reduce the cost of healthcare. Something’s gonna give.
The intrinsic problem here is parasites, but what in our present society constitutes a parasite? Tony Soprano -- one of my go-to entrepreneurial role models -- was a parasite in that his success required a healthy host and Tony knew it. You can only steal so much before the host is compromised or even killed. If Tony was going to steal or extort or otherwise illegally take money out of the economy, well he wanted that money to be backed by the full faith and credit of the US government -- by a healthy host. Same for the banking and mortgage crisis of 2008 where the bankers took more and more until the host they were sucking dry -- the American homeowner -- could no longer both pay and survive. Tony Soprano was smarter than the bankers.
As individuals -- and even as a society -- our greatest opportunity these days lies in entrepreneurism, in creating, as my friends did, a startup -- a family business. We need jobs and startups are where most of the new jobs come from. What used to be freedom from a big employer and a chance to follow your dream is now more of a safety net for the creative unemployed. It’s more important than ever.
And I’m far from the only one to realize this. My PBS project I mentioned a few days ago is Startup America, a TV series that began with my Not in Silicon Valley Startup Tour back in 2010. We’ve been following 32 tech startups for five years and will present some of their stories thanks to WNET, PBS, and Salesforce.com. A lot of these companies fail of course, yet still they are inspiring stories that teach great lessons and give me real hope for the future of our economy and our country. It’s the opposite of fear. It’s the politics of optimism and hope.
I haven’t been writing as much lately. This has been for several reasons, some of which may surprise you. It’s true I’ve had to spend a lot of time fending-off attacks from IBM corporate (more on that below) but I’ve mainly been at work on two secret projects. One is a new documentary series for PBS and the other a new technology startup I’m doing with a partner. The PBS series will be announced when PBS decides to announce it but most of the shooting is already done. The startup has taken the traditional VC route and looks, surprisingly, like it will actually be funded. Evidently if your idea is wild enough and your partner is smart enough it’s still possible for an idiot (that would be me) to make it in Silicon Valley. This project, too, will be announced when the money is dry, hopefully in a week or two.
In the meantime I’ve been working on several columns. One of these, about Yahoo, has been especially frustrating. There was a time when companies actually wanted reporters to write about them, but I guess those days are past. I e-mailed Yahoo corporate communications (twice) and have yet to hear back from media@yahoo-inc.com. I called (again twice) the number they give on Yahoo press releases, leaving messages both times but have yet to get a call back from 408-349-4040. For awhile the Yahoo press site was completely down.
So if you work at Yahoo please ask someone in the press office to give me a call at 707-525-9519.
Back at boring old IBM, heads continue to roll in the current reorg. I don’t want to waste a whole column on this but stories reach me every day showing both the mean spirit and delusion that seem to be the dominant themes these days in Armonk. I’ll give you two examples here, the first being mean spirit:
In June of 2011 IBM gave every employee seven shares (about $1,000 worth) of restricted stock, a gift to the workforce on IBM’s 100th Anniversary. Normally the vesting period for such a grant would be four years. But IBM in this case pushed the vesting date back to December 2015 (4.5 years). It is now like the 401(k) employee contribution match but unlike any IBM stock program I know. So if you’ll still be on the IBM payroll in June but won’t be in December, as looks to be the case for tens of thousands of IBMers, the company gets to keep your money.
And now delusion:
Here’s a VentureBeat story: IBM makes a big bet on software-defined storage and hybrid clouds. It’s a good idea but IBM, being IBM, is doing it exactly the wrong way.
Where IBM has it absolutely right is that managing storage between a data center and a cloud service can be tricky. The cloud folks use very different technology making this an area where most cloud services are not quite ready. So this announcement from IBM could be exactly the right idea. Its XIV technology is very good and if it can be applied to this challenge it could be a big win with corporations wanting to move to the cloud.
But there are two things about this announcement that bother me -- five years and $1 billion.
If IBM takes more than six months to produce its first iteration of this service it will be too little too late. In the cloud world, especially given IBM is years late to that market, IBM needs to move at Internet speed.
Also the $1 billion investment is way too high. Even IBM should be able to do this product for $50 million. I think the truth here is that $1 billion is a marketing number intended to impress big customers, to pre-sell some expensive XIV systems, and to keep those big customers waiting years until the technology -- by then completely obsolete -- is finished.
Finally, pity the poor IEEE, which picked-up information from my Forbes column only to be almost crushed by a negative reaction from IBM. I give the IEEE a lot of credit here for sticking more or less to its (in this case our) guns, but the point I want to make has to do with the varying burden of proof in the case of such business stories.
IBM of course hates me and has worked hard to discredit my work. Yet there is an interesting disparity in how the burden of journalistic proof is being applied. IBM says I am wrong yet consistently won’t say what’s right. Exactly how many employees have been RA’d so far in 2015? How many have been fired outright? How many have been pushed into early retirements by being rated a 3 for the first time in their long careers? And how many of these affected people are age 50 and over? IBM refuses to give any of this information.
If stories are pro-IBM, anyone can write anything without substantiation. If they are anti-IBM, then it has to be wrong. IBM can say it’s wrong, get ugly about it, and not provide any proof of what it says. Ironically, this is exactly the opposite of mainstream news, where the more negative the news, the fewer details seem to be needed.
With Radio Shack having declared Chapter 11 bankruptcy, with hundreds of stores closing and others possibly becoming Sprint locations, let’s take a moment to look back at the important contributions the company made in the early days of personal computing.
Charles Tandy started the Tandy Leather Company which opened hundreds of little shops in the 1950s selling kits for consumers to make their own tooled leather belts, for example. I made one in 1959, burning my name into the belt with a soldering iron. As leather craft faded as a hobby and electronics boomed many of those Tandy Leather stores became Radio Shacks (but not all -- a few leather stores survive even today). Radio Shack stores always had the advantage of proximity balanced by higher prices. If you needed a part or two you drove down to Radio Shack but if you had a bunch of electronic parts to buy there was generally some cheaper store across town.
Even today you can buy a $35 Raspberry Pi computer from the Raspberry Pi Foundation or a $121 Raspberry Pi kit from Radio Shack. No wonder the Shack is in trouble.
Just as Jack Tramiel’s Commodore rode the 1970s handheld calculator boom, Radio Shack rode that decade’s even bigger CB Radio boom. But as each boom faded the two companies had to find the Next Big Thing, so they both turned to personal computers.
Radio Shack found its salvation in Steve Leininger, an employee at Paul Terrell’s Byte Shop in Mountain View where the original Apple 1s were sold. Leininger designed the TRS-80 that was to become, from 1978-82, the fastest-selling computer in the world -- bigger at the time than even the Apple II. The barebones TRS-80 cost $199.95 in a pretty much unusable form and about $1800 completely tricked out. The TRS-80 was fabulously successful.
Not so successful but equally important to computer history was Radio Shack’s answer to the IBM PC, the Model 2000, which appeared in the fall of 1983. The Model 2000 was intended to beat the IBM PC with twice the speed, more storage, and higher-resolution graphics. The trick was its more powerful processor, the Intel 80186, which could run rings around IBM’s old 8088.
Because Tandy had its own distribution through 5,000 Radio Shack stores and through a chain of Tandy Computer Centers, the company thought for a long time that it was somehow immune to the influence of the IBM PC standard. The TRS-80, after all, was a proprietary design and a huge success. Tandy thought of their trusty Radio Shack customers as Albanians who would loyally shop at the Albanian Computer Store, no matter what was happening in the rest of the world. Alas, it was not to be so.
Bill Gates was a strong believer in the Model 2000 because it was the only mass market personal computer powerful enough to run new software from Microsoft called Windows without being embarrassingly slow. For Windows to succeed, Bill Gates needed a computer like the Model 2000 available everywhere. So young Bill, who handled the Tandy account himself, predicted that the computer would be a grand success -- something the boys and girls at Tandy HQ in Fort Worth wanted badly to hear. And Gates made a public endorsement of the Model 2000, hoping to sway customers and promote Windows as well.
Still, the Model 2000 failed miserably. Nobody gave a damn about Windows 1, which didn’t appear until 1985, and even then didn’t work well. The computer wasn’t hardware compatible with IBM. It wasn’t very software compatible with IBM either, and the most popular IBM PC programs -- the ones that talked directly to the PC’s memory and so worked faster than those that allowed the operating system to do the talking for them -- wouldn’t work at all. Even the signals from the keyboard were different from IBM’s, which drove software developers crazy and was one of the reasons that only a handful of software houses produced 2000-specific versions of their products.
Today the Model 2000 is considered the magnum opus of Radio Shack marketing failures. Worse, a Radio Shack computer buyer in his last days with the company for some reason ordered 20,000 more of the systems built even when it was apparent they weren’t selling. Tandy eventually sold 5,000 of those systems to itself, placing one in each Radio Shack store to track inventory. Some leftover Model 2000s were still in the warehouse in early 1990, almost seven years later.
Still, the Model 2000’s failure was Bill Gates’s gain. Windows 1 was a failure, but the head of Radio Shack’s computer division, Jon Shirley, the very guy who’d been duped by Bill Gates into doing the Model 2000 in the first place, sensed that his position in Fort Worth was in danger and joined Microsoft as president in 1983.
While Radio Shack still sells computers (at least for a few weeks longer) the company’s computer heyday peaked around 1980 -- 35 years ago.
Mobile phones offered the company a lifeline over the past decade but the positioning was all wrong. Those 1000+ Radio Shack stores that are rumored to be going to Sprint, for example, will represent the primary point of retail contact between the wireless carrier and subscribers valued by Wall Street at more than $2000 each. Sprint has far more to gain from each Radio Shack location it takes over than Tandy could ever have hoped to get from selling us batteries and speaker wire.
Where Radio Shack could no longer operate a profitable enough business in the age of Amazon Prime, for Sprint there’s still real value in neighborhood locations.
gad·fly (/ˈɡadˌflī/) noun. 1. a fly that bites livestock, especially a horsefly, warble fly, or botfly. 2. an annoying person, especially one who provokes others into action by criticism.
Sometimes being a gadfly is exactly what’s required. That’s certainly the case with IBM and has been for the almost eight years I’ve been following this depressing story. Gadflies came up because IBM finally reacted today to my last column predicting a massive force reduction this week. They denied it, of course -- not the workforce reduction but its size, saying there won’t be even close to 110,000 workers laid off -- and they called me a gadfly, which was apparently intended as criticism, but I’m rather proud of it.
So what’s the truth about these job cuts? Well we’ll know this week because I hear the notices are already in transit to be delivered on Wednesday. (I originally wrote "in the mail" but then realized IBM would condemn me if the notices come by FedEx instead.)
I think IBM is dissembling, fixating on the term 110,000 layoffs, which by the way I never used. Like my young sons who never hit each other but instead push, slap, graze, or brush, IBM is playing word games to obscure the truth.
There are many ways to spin a workforce reduction and here’s how one IBM source explained this one to me just this morning:
If you are following the Endicott Alliance board (an organization of IBM workers) you know that they are only 'officially' laying off several thousand (maybe 12K I’m guessing), but others are being pushed out by being given poor performance ratings. This includes people on their 'bridge to retirement' program that took that option, thinking it kept them 'safe' from resource actions (layoffs/firings). There is a loophole that says they can be dismissed for 'performance' reasons, which is exactly why many of my long-time, devoted, hard working peers are suddenly getting the worst rating, a 3. It’s so they can be dismissed without any separation package and no hit to the RA or workforce rebalancing fund. Pure evil. The same trick allows IBM to not report to the state’s WARN act about layoffs. It used to be something like 10 percent of employees 'had' to be labeled 3’s, but recently the required number of 3’s was way, way upped according to some managers. So that’s how they are doing it… Some managers have teams of hard working people that put in tons of overtime and do everything they are asked, and by requirement some must be given 1’s, some 2+, some 2, and unfortunately some 3’s. It’s 50’s era kind of evil. They also got rid of some employees by 'stuffing' them into the Lenovo x86 acquisition, shipping tons of people over there that never even worked on x86 stuff. Lenovo has discovered this and has given some of them a way better package (year salary and benefits), and taking it up quietly with IBM.
I love this Lenovo detail, which reminds me of when Pan American Airways was failing in the early 1980s and sold its Pacific routes to United Air Lines. With that deal came 25 PanAm 747s, but before handing them over PanAm installed on the planes its oldest and most worn engines.
My further understanding of Project Chrome is IBM plans to give people notice by the 28th (Wednesday) so they will be off the books by the end of February. That timing pretty much screams that these are more than just layoffs, which could involve weeks or months of severance pay. It suggests outright firings, or offshore staff reductions, or contractors released, or strongly motivated early retirements as mentioned above. None of those are layoffs, though there will undoubtedly be layoffs, too.
What really matters is not the terminology but how many people IBM will be paying come March 1st.
A source reputed to be from IBM today told TechCrunch "the layoff number was 10 percent of the workforce (or 43,000) and that the layoffs would be conducted in approximately 10,000 employee increments per quarter until the company righted the ship".
What if that takes 11 quarters?
For the last few months, I’ve heard that senior managers have been pleading with IBM executives not to go through with Project Chrome because it will break accounts and inevitably lead to IBM’s failure to meet contract obligations, losing customers. But that’s apparently okay.
Just don’t call them gadflies.
IBM’s big layoff-cum-reorganization called Project Chrome kicks-off next week when 26 percent of IBM employees will get calls from their managers followed by thick envelopes on their doorsteps. By the end of February all 26 percent will be gone. I’m told this has been in the planning for months and I first heard about it back in November. This biggest reorganization in IBM history is going to be a nightmare for everyone and at first I expected it to be a failure for IBM management, too. But then I thought further and I think I’ve figured it out…
I don’t think IBM management actually cares. More on this later.
IBM really does not know how to do reorganizations, which are mostly political realignments. It comes up with these ideas of how to group people. It makes a big deal about it. Then for years the new organization figures out what it’s actually supposed to be doing, how it’s supposed to be done, and it spends a lot of time fixing problems caused by the reorganization.
Here are some examples of what I mean. In the USA, mainframe and storage talent will see deep cuts. This is a bit stupid and typical for IBM. It just announced the new Z13 mainframe and hopes it will stimulate sales. Yet it will be cutting the very teams needed to help move customers from their old systems to the new Z13.
The storage cuts are likely to be short sighted too. Most cloud services use different storage technology than customers use in their data centers. This makes data replication and synchronization difficult. IBM’s cloud business needs to find a way to efficiently work well with storage systems found in customer data centers. Whacking the storage teams doesn’t help with this problem.
Meanwhile the new IBM security business has a tremendous number of open positions, is promising promotions and pay increases, etc. It is going after every security skill in the business. The collateral effect of this is that most IBM services contracts will lose their security person and won’t be able to replace him/her. This will hurt a lot of contracts and put IBM in an even worse position with the customer. Creating this new business unit will be destructive to other business units and alienate existing customers. The size of the new security business is impressive. It will have to sign a lot of new business just to break even and pay all those salaries. The giant assumption is that there is that much business to be signed. In a year or two this business unit could be facing huge layoffs. This is the classic -- shoot, ready, aim.
In one of the new business units I’ve heard that everyone is going to be interviewed and will have to give a sales pitch. If you can’t sell, you’re out. Clearly IBM’s declining revenue problem is tempering the organization of this unit. This team will fix the problem by getting rid of the people who can’t sell. This is the classic treat the symptom and ignore the cause way of thinking. There are reasons why customers are buying less from IBM. Working harder to sell won’t fix those problems. If anything it will probably increase IBM’s problems with its customers.
The new cloud business is particularly troubling. This business unit is based on the assumption that cloud is the universal solution, now tell me what you need. What if my application won’t work in the cloud? There are common things used in many business systems that do not exist in cloud services, anyone’s cloud services (Amazon, Microsoft, IBM, anyone!). New IBM organizations are being built to push cloud business whether it works in a given situation or not.
IBM has a sales culture. This reorganization was designed with a sales mindset. IBM has decided what it wants to sell. It assumes its customers will want to buy it. It completely ignores the fact there are other factors involved in running a successful company.
Now to why I think there’s a good chance none of this actually matters to IBM management. Investors and analysts alike have to stop believing everything they hear from IBM. Big Blue is a master at controlling the discussion. It states or announces something, treating it as fact whether it exists or not. It builds a story around it. IBM uses this approach to control competitors, to manage customer expectations, and to conduct business on IBM’s terms.
So while IBM is supposedly transforming, it is also losing business and customers every quarter. What is it actually doing to fix this? Nothing. IBM says the company is in a transition and is going through the biggest reorganization in its history, but will that really fix a very obvious customer relationship problem? No, it won’t.
Transformation at IBM appears to me to be a smoke screen to protect management that doesn’t actually know what it is doing.
Here’s a similar view from one IBMer that came in just this morning. Notice how he/she refers to IBM in the third person:
…the only thing IBM is doing is playing its balance sheet… to show good profits and play with the amount of shares in the market… ergo manipulate EPS (earnings per share)
If you look at this you realize we have already lost the battle
IBM spent 2.5 times the amount of money on EPS manipulation than on CAPEX and overall it spends less than half of what competitors are spending on R&D
Where IBM competitors show double digit growth, IBM shows revenue decline….. so, IBM is outgunned and outsmarted… simple as that
So, only two scenarios… IBM is serious about a turn around and will try to find a new equilibrium and thereafter growth path… this means revenue will continue to decline as it changes the revenue mix before it can grow… or management does not care (or has no clue)… and will try to maximize bonus and get the hell out.
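To see why playing with the share count counts as "EPS manipulation," here’s the arithmetic with made-up round numbers -- illustrative figures only, not IBM’s actual financials:

```python
# Toy numbers, not IBM's actual financials: how retiring shares lifts
# earnings per share even while earnings themselves fall.

net_income_before = 16.0e9   # hypothetical annual earnings
shares_before = 1.00e9       # hypothetical shares outstanding
eps_before = net_income_before / shares_before      # $16.00 per share

net_income_after = 15.0e9    # earnings fall about 6 percent...
shares_after = 0.90e9        # ...but 10 percent of the shares are bought back
eps_after = net_income_after / shares_after         # $16.67 per share

print(f"EPS before buybacks: ${eps_before:.2f}")
print(f"EPS after buybacks:  ${eps_after:.2f} "
      f"(up {100 * (eps_after / eps_before - 1):.1f}% on lower earnings)")
```

The headline number goes up while the business underneath it shrinks, which is exactly the reader’s point.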
It’s time, finally, for my long-delayed 2015 predictions. Things just kept changing so fast I had to keep re-writing, but have finally stopped. 2015 will definitely be the Year of Monetization, by which I mean it’s the year when the bottom line and showing profits will become a key motivator in almost every market. And while profit -- like beer -- is generally good, it isn’t always good for everyone.
So here are my 10 predictions in no particular order.
Prediction #1 -- Everyone gets the crap scared out of them by data security problems. In many ways this was set up by 2014, a year when, between Edward Snowden and Target, America woke up to the dangers of lax data security. Where this year is somewhat different, I feel, is in the implications of these threats and how they play out. There will still be data breaches and, though there will be proposals for how to retool to avoid such problems in the future, I don’t see those turning into anything real before 2016. So 2015 will be the year when people claim to fix your problem but really can’t. Watch out for those crooks.
2015 will also be the year when the bad guys start to see their own profit squeeze and respond by doing exactly the things we hope they won’t. To this point, you see, the folks who steal all this information have been generally wholesaling the data to other bad guys who use the data to steal our identities and money. Only the buyers aren’t really that good at stealing our stuff, so the wholesale value of a million credit card numbers has dropped significantly. So rather than finding new careers like my own favorite, opening a frozen custard stand, the guys who stole our numbers in the first place are starting to cut out the middlemen and go after our stuff themselves. Given these are the really smart bad guys taking over from the not-so-smart bad guys, expect things to get bad, very bad, with billions -- billions -- in additional losses for financial institutions, retailers, and even some of us. These are the events that will finally lead -- in 2016 -- to real data security improvements.
Prediction #2 -- Google starts stealing lunch money. Even mighty Google is under pressure to show more profit, so the search giant is reaching out for new profit centers. The first hint of this came to me in a story about how Google is preparing to recommend and sell insurance. This is revolutionary. After all, the companies that were already recommending and selling insurance online are some of Google’s biggest customers. But business is business, so Google is reportedly going into competition with its former good buddies. This is, I'm sure, the beginning of a trend. Expect Google to compete this year for every lead-gen business worth doing. Insurance, mortgages, new cars, real estate -- any industry that pays well for online customer referrals will be ripe for Google, including my favorite mortgage site of all time, The Mortgage Professor, run by my personal hero, Jack Pritchard Jr.
Prediction #3 -- Google buys Twitter. There’s both push and pull in this prediction. Google needs better mojo and Twitter needs any mojo at all. Not only are Google’s profits under some pressure, but since buying WhatsApp, Facebook has been on a tear and Google needs to respond. Then there’s the problem that Twitter doesn’t appear to actually know how to run a business. Why not solve two problems at the same time?
Prediction #4 -- Amazon finally faces reality but that has no effect on the cloud price war. Up until now Amazon has been a company singularly blessed by Wall Street with a high stock price that doesn’t seem to require high earnings to maintain. Well that has now changed. Amazon is becoming mortal in the eyes of Wall Street and so will have to cut some costs and not be quite so crazy about how much it spends entering new markets. Most of this won’t even be noticeable but it really began with the Amazon Prime price increase of several months ago. But here’s where this new parsimony won’t be seen at Amazon -- in cloud service pricing.
That’s because the cloud price war that Amazon started is no longer dependent on Amazon to continue. Google is doing its part, as are half a dozen other companies, including Microsoft, that are willing to lose money to gain customers. So cloud prices will continue to drop, the Wild West era for cloud-based startups will continue, and anyone (this means you, IBM) who thinks there will be cloud suppliers who can charge a premium because they are somehow better than the others, well that’s just cloudy thinking. If there ever is a premium cloud it won’t be big enough to matter.
Prediction #5 -- Immigration reform will finally make it through Congress and the White House and tech workers will be screwed. Remember the tech worker shortage that really doesn’t exist in the USA? Well lobbyists remember it fondly and will ride the new Republican majority to legislation that will give tech employers even more chances to bring under-paid tech workers into the country to replace domestic tech workers who supposedly don’t exist. The facts are indisputable, but you can lead a politician to water though you apparently can’t make him or her vote. So despite the improving and ever-more-tech-centric economy, look for no tech salary increases (again) for 2015, just more H-1Bs.
Prediction #6 -- Yahoo is decimated by activist investors. Everyone has an idea what Yahoo should do with all that money earned from the Alibaba IPO. And when it comes to activist investors, their idea is generally that Yahoo should find clever ways to simply hand over the cash to shareholders. This will happen. It’s because Yahoo can’t move fast enough to avoid it. If Marissa Mayer thinks she’ll be allowed time to thoughtfully invest that windfall, then she could shortly be out of a job. In one sense it might be better for Yahoo to just make a big stupid acquisition that makes the vultures go away, though I prefer that she turn Y! into more of a Silicon Valley venture capital empire.
Prediction #7 -- Wearables go terminal. As I shared a few days ago, the problem with wearable smart devices is they cost too much, consume too much power, and are too bulky. There’s a solution to this that’s beginning to emerge but it requires new chips that I am not sure will be available in 2015. So my even predicting this is an act of optimism. But if not 2015, then definitely by 2016.
Here’s the deal: the problem with wearables isn’t making them smart because they are typically accessories to smart phones we already have -- phones that are by themselves plenty smart enough. So let’s stop thinking of wearables as computers and start thinking of them as terminals. All that smart watch really needs is a display, a touchscreen of sorts, and Bluetooth. Smart watches should require no computational capability beyond what I just described -- no microprocessor in the sense that we’ve used that term for the last 30+ years. The archetype here is Mainframe2’s migration of Photoshop to the cloud, sending screen images as H.264 video frames. Now replace the word cloud with the words smart phone and you’ll see where this is going. All the watch has to do is decode H.264, paint screens, and carry clicks: no OS or local processing required. Even bio-sensing is just another form of click. The result will be dumb watches that act smart, run for days, and cost less.
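If you want to picture just how little software such a terminal-watch would need, here’s a toy sketch. The link, decoder, and display objects below are invented stand-ins so the example runs; this is not any real watch SDK, just the shape of the idea.

```python
# Hypothetical event loop for a "dumb terminal" smart watch: decode video
# frames pushed from the phone, paint them, and send touches and sensor
# readings back as events. All the helper classes are made-up stubs.

class PhoneLink:                       # stand-in for the Bluetooth connection
    def __init__(self, frames):
        self.frames = iter(frames)
        self.sent = []
    def receive_frame(self):
        return next(self.frames, None)
    def send_event(self, event):
        self.sent.append(event)

class Display:                         # stand-in for the touchscreen
    def paint(self, image):
        print("painted:", image)
    def pending_touches(self):
        return [("tap", 10, 20)]       # stub: pretend the user tapped once

def decode_h264(frame):                # stand-in for a hardware decoder
    return f"decoded<{frame}>"

def run_watch(link, display):
    """All the 'computer' a terminal-style watch needs."""
    while (frame := link.receive_frame()) is not None:
        display.paint(decode_h264(frame))     # screens come from the phone
        for touch in display.pending_touches():
            link.send_event(touch)            # touches go back as clicks
        link.send_event(("heart_rate", 72))   # bio-sensing is just another click

run_watch(PhoneLink(["frame1", "frame2"]), Display())
```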
Prediction #8 -- IBM’s further decline. You knew this was coming, didn’t you? IBM is screwed and doesn’t even seem to realize it. There was an earlier version of this prediction that said IBM earnings would drop for the first 2-3 quarters of 2015 and then recover because of lower oil prices and the generally improving global economy. Well forget that. Even if the world does do better IBM will do worse because the low dollar IBM has relied on since the Sam Palmisano days has gone away so there’s nothing left to be made on that carry trade. IBM’s old businesses are dying faster than its new businesses can grow. The company’s only near-term hope is that the Fed keeps interest rates low, but that’s not saying much.
And speaking of saying much, my next column will be all about IBM, its new Reorg-from-Hell, and what this means to customers, employees, and investors alike.
Prediction #9 -- Where IBM leads, IBM competitors are following. When I was in college one of my part-time jobs was at Rubbermaid, a fantastic company based in Wooster, Ohio, where I went to school. Rubbermaid was run like a watch by Stan Gault, class of 1948 -- a guy who really knew how to make money. Stan eventually retired, came out of retirement to save Goodyear, then retired again and now gives away his money. But the Rubbermaid of today isn’t anything like the company I remember because it tried and failed to compete with Tyco International, a conglomerate run by Dennis Kozlowski. No matter how well Rubbermaid did, Tyco did better, so Wall Street rewarded Tyco and punished Rubbermaid, eventually driving the company into a merger that created a new company -- Newell Rubbermaid -- which is way more Newell than it is Rubber. The problem with this tale, of course, is that Tyco’s Dennis Kozlowski was a crook and ended up in prison. Tyco appeared to do better than Rubbermaid because Tyco lied.
Now I am not saying IBM has been lying, but I am saying that IBM has been setting an example of corporate behavior that makes little to no sense. Sales go down yet the stock price (until recently) went up! Like the Rubbermaid example, this has had some effect on IBM competitors, specifically Computer Sciences Corporation (CSC) and Hewlett Packard, both of which are having IBM-like troubles with their services businesses. And for CSC, which has no printers or PCs to sell, well services is about all there is. So look for IBM-inspired problems in these IT service operations for all of 2015.
Prediction #10 -- Still no HDTV from Apple, but it won’t matter. Here’s the thing to remember about Apple: it’s a mobile phone company! Nothing else matters. Apple can and will continue to grow in 2015 based on current products and expanding into new markets. For 2016 Apple will need something new and I predict it’ll get it. But for 2015, Apple will do just fine.
This was supposed to be the time for my technology predictions for 2015, which I’ll get to, I promise, but first I want to explain the major trend I see: that 2015 will become known as The Year When Nothing Happened. Of course things will happen in 2015, but I think the year of truly revolutionary change will be 2016, not 2015. It takes time for trends to develop and for revolutionary products to hit the market. I’d say the trends are clear; it’s the products and their manufacturers that aren’t yet identifiable.
So here are three areas where I’ll disagree with most of my peers and say I don’t expect to see much visible progress in 2015.
1 -- Data security. It will get a lot worse before it gets better. Data breaches in commerce, industry, government, and the military have become so pervasive that we’re being forced as a society into a game change, which is probably the only way these things can be handled. But game changes take time to implement and impose. The basic problems come down to identity, communication, information, and money. You can come at it from any of these directions but the only true solution is to approach them all at the same time. If we can come up with better ways to define identity, to control communication, to secure information, and to define money then it gets a lot harder to steal any of these. None of these will happen in 2015.
An expression that might have jumped out at you as odd is "define money", but that’s key. We have to add metadata to money -- all money. Where did it come from? Who created it? Who had it last? Such smart money would be harder to steal, easier to trace, and therefore more difficult for the bad guys to use. Of course this flies completely in the face of trends like BitCoin, where money is verifiable yet anonymous. Still I see these two trends as allied. BitCoin, while anonymous, embraces completely the concept of money being verifiable. You can’t counterfeit BitCoins. Further defining money is just a matter of adding additional metadata fields. And adding fields can be mandated by the entities that accept money, like central banks. Apple Pay is part of defined money but it needs more features. BitCoin is another piece, but more is needed. We’ll see these broadly discussed in 2015 but nothing will be resolved this year.
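Here, as a thought experiment, is what money with metadata might look like as a data structure. The fields are my own guesses at the kind of provenance the idea requires -- nothing below reflects an actual payment standard or anything a central bank has proposed.

```python
# A toy sketch of "smart money" -- money that carries its own provenance.
from dataclasses import dataclass, field

@dataclass
class SmartDollar:
    amount_cents: int
    issued_by: str                      # who created it (a central bank, say)
    current_holder: str                 # who has it now
    provenance: list = field(default_factory=list)   # every prior holder

    def transfer(self, new_holder):
        """Hand the money to someone new, leaving a trail behind it."""
        self.provenance.append(self.current_holder)
        self.current_holder = new_holder

buck = SmartDollar(amount_cents=100, issued_by="Federal Reserve", current_holder="Alice")
buck.transfer("Bob")
print(buck.current_holder, buck.provenance)   # Bob ['Alice'] -- the history travels with it
```

Mandating extra fields would then be exactly what it sounds like: adding more attributes to the record and requiring anyone who accepts the money to carry them along.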
2 -- Entertainment. We spend so much of our time seeking diversion and so much of our disposable income buying it that anything fundamentally changing these behaviors will bring a revolution. For all the importance given to the word creativity when it comes to entertainment, this industry is for the most part reactive, not active. The TV, movie, and music businesses want to hang onto the success they already have. But all are in turmoil. I’ve laid this out in column after column, but sometime soon a major cable or phone company is going to go truly a la carte and when that happens the entertainment business will be changed overnight. Again, I predict this won’t happen until 2016, but we’ll see precursors in 2015 like Dish Network’s new Over-The-Top service. Charlie Ergen of Dish is a dangerous man and I like that.
3 -- Wearables. The big frigging wearable deal of 2015 is supposed to be the Apple Watch, but it won’t be. The reason is simple -- Apple’s watch is too thick, costs too much, and doesn’t have enough battery life. It’s the 128K Macintosh of smart watches. There are ways to solve these problems -- ways that are quite obvious to me -- but they’ll require new chips that I know are coming but aren’t quite here. So here’s another one for 2016, when I guarantee watches will be thin, cheap, have batteries that last longer than those in your phone, and do more, too.
The Apple Watch is Cupertino grabbing mindshare and early adopter wallets, nothing else.
This is the time of year when I typically write my technology predictions, an annual exercise in ego and humiliation I’ve been doing for about a dozen years. Historically I’ve been around 70 percent correct, which is more accurate than a random walk and therefore maybe -- maybe -- worth reading. You decide.
But first let’s look back at my predictions from a year ago to see how I did. Understand that I’m the only pundit who actually reviews his previous year’s predictions. If only I could be a weasel like those other guys!
#1 -- Microsoft gets worse before it gets better
MISS -- Satya Nadella, a longtime Microsoft veteran, became the CEO. The early fear was that an insider would continue Ballmer’s culture. Nadella has not, and has brought a new management approach to Microsoft. He has opened communications with customers and partners, recognized Microsoft mistakes, and seems to be finding a place for Microsoft in a cross-platform mobile world. The reins of Microsoft are clearly in Nadella’s hands. There is no apparent sign of resistance or conflict from Gates and Ballmer. So things did not get worse. Microsoft has clearly turned the corner and is headed in a better direction. Amazing things are possible when you hire the right person for CEO. I didn’t think Microsoft would.
#2 -- IBM throws in the towel. Any minute some bean counter at IBM is going to figure out that it is statistically impossible for the company to reach its stated earnings-per-share goal of $20 for 2015
HIT -- After three disappointing quarterly earnings reports in 2014, IBM admitted the $20 EPS goal would not be possible in 2015. Cuts throughout 2014 have seriously damaged the company. IBM continues to alienate its customers. Revenue continues to drop. There are management crises in many parts of IBM but this isn’t obvious to the outside world yet. The board is getting a lot of heat.
IBM’s new strategy in 2014 was to double down on its old strategy -- CAMSS -- Cloud, Analytics, Mobile, Social, and Security. The only problem is IBM’s existing businesses are declining faster than the new businesses can grow. Until IBM stops damaging its Services divisions and repairs them, IBM will continue to falter.
#3 -- Blackberry to Microsoft (assuming Elop gets the top job at Microsoft).
MISS -- Microsoft did not buy Blackberry following its Nokia acquisition. In 2014 Microsoft squandered its investment in Nokia. The value of the Nokia brand and team is gradually being lost. Microsoft even has plans to get rid of the name Nokia.
#4 -- Intel does ARM, kinda.
MISS -- This has not happened… yet. IBM’s spinoff of its semiconductor operations to GlobalFoundries shows how hard it is and how long it takes to make changes like this. In 2014 most of the legal issues between Apple and Samsung were resolved. This dark chapter may be coming to an end and there may be less need for Apple to change foundries. Still, ARM is about to take over the world.
#5 -- Samsung peaks. With Apple gone (as a customer) and Samsung phone margins eroding, what’s the company to do? 4K TVs aren’t it. Samsung needs to actually invent something and I don’t see that happening, at least not in 2014.
HIT -- For the most part this prediction was correct. The good news is Samsung did not sit idly by in 2014. Early in 2015 at the CES show it announced a number of products, and a vision of a fully smart home.
#6 -- Facebook transforms itself (or tries to) with a huge acquisition.
HIT -- This prediction was correct. In 2014 Facebook acquired several companies. Instead of pursuing Snapchat, it bought WhatsApp. It also purchased Oculus VR, Ascenta, ProtoGeo Oy, PrivateCore, and WaveGroup Sound. It spent over $21B in acquisitions in 2014.
#7 -- Cable TV is just fine, thank you. Avram thinks cable TV will go all-IP.
HIT -- This prediction is on track! Over the last couple years cable TV has been quietly going 100 percent digital. With the recent announcement of the Sling TV by Dish Network there is a clear trend towards a true IP television service. Improvements by Roku and Google, and Amazon’s new Fire TV are all part of TV’s future. To stay competitive and relevant cable TV will have to offer low cost Internet devices for TVs and provide programming for them.
#8 -- The Netflix effect continues, this time with pinkies raised. Hollywood is for sale.
HIT -- Hollywood is still confused. Is Netflix just another distributor, or a producer, or a competitor? The late-2014 cyber attack on Sony shows how out of touch Hollywood is with the Internet.
#9 -- What cloud? The cloud disappears.
HIT -- This prediction is correct, but not yet. If you ask IBM, cloud is growing and strategic. It is growing and important, but it is becoming a commodity service super-fast. Important progress is being made in cloud technology -- containers, Docker, Rocket, CoreOS, etc. The net result is there will be very little for a cloud provider to do other than maintain a datacenter full of servers. Prices and profit margins will be tight.
#10 -- Smart cards finally find their place in America. I covered smart cards in Electric Money, my PBS series from 2001, yet they still aren’t popular in the USA. Smart cards, if you don’t know it, are credit or debit cards with embedded chips that impart greater security, though at a cost. They’ve been popular in Europe for 15 years but American banks are too cheap to use them... or were. The Target data breach and others will finally change that in 2014 as the enterprise cost of insecurity becomes just too high even for banks Too Big to Fail.
HIT -- This prediction was correct. Card readers across the country are being upgraded to support smart cards. Most responsible retailers are working to improve their credit card security at warp speed. Some retailers even suspended their usual "change freeze" during the 2014 holiday shopping season. Normally during this peak sales period no changes to a retailer’s systems and networks are allowed. This year was different. Security projects continued through the holidays. The major credit card providers are retooling their systems. There were a few companies that did not take security seriously and left the front door wide open and unlocked -- Home Depot and Sony. Both were subjects of big data breaches in 2014.
Counting on my fingers I see that 2014 maintained my 70 percent average. My next column, coming soon, will be my predictions for 2015. I think you’ll be surprised by what I see on the technical horizon.
Readers have been asking me to write about the recent network hack at Sony Pictures Entertainment. If you run a company like Sony Pictures it has to be tough to see your company secrets stolen all at once -- salaries, scripts, and Social Security numbers all revealed along with a pre-release HD copy of Annie, not to mention an entire database of unhappy Sony employees who want to work anywhere Adam Sandler doesn’t. But frankly my dear I don’t give a damn about any of that so let’s cut to the heart of this problem which really comes down to executive privilege.
Sony was hacked because some president or vice-president or division head or maybe an honest-to-God movie star didn’t want something stupid like network security to interfere with their Facebook/YouTube/porn/whatever workplace obsession. Security at Sony Pictures wasn’t breached, it was abandoned, and this recent hack is the perfectly logical result.
"I used to run IT for Sony Pictures Digital Entertainment", confirmed a guy named Lionel Felix in a recent blog comment, "and (I) know that there were a number of simple vectors for this kind of attack there. They ran IT there like a big small office with lots of very high-maintenance execs who refused to follow any security protocols. I’m surprised it took this long for this to happen".
High-maintenance execs are everywhere these days. While average workers regularly go for years without a raise, we seem to live in the Age of the High-Maintenance Exec.
I wrote a column not long ago advising that entire corporate networks should be disconnected from the Internet for security reasons. If you want to post on Facebook or email your mother, do it on your smartphone using cellular, not corporate, data minutes. Yet somehow on network after network, these simple measures aren’t taken.
Let me get excruciatingly specific: in the case of nearly all the recent high profile corporate data breaches in the USA, the primary ISP involved was AT&T. This is not an indictment of AT&T at all, just the opposite. As far as I can tell AT&T did nothing wrong. But in every case I’ve looked at, AT&T customers effectively sabotaged their own security.
AT&T is the only ISP I know of that segregates its Multi-Protocol Label Switching (MPLS) private networks from Internet access. The client has to very specifically bridge the two to get to the Internet and they do it all the time. For AT&T this is an immutable law -- no private MPLS service has connectivity to the Internet. If you want Internet you order a second pipe. Yet Home Depot, JP Morgan, and Target all use the AT&T MPLS service so they specifically allowed their private networks to be bridged to the public network.
The bad guys were kept out until that happened.
This behavior goes against every classic IT rule of thumb except one. IT Rule #1 is Hell no, we can’t do that. There’s a long tradition of saying No in IT, yet here it didn’t happen. Rule #2 is we’ll need a lot more money and bandwidth to do that. Given AT&T’s position on the matter it should have been easy to score the required second pipe for Internet traffic, yet somehow it didn’t happen. Only Rule #3 -- Thank you sir, may I have another -- seems to have held, and therein lies the basic problem: IT can no longer stand up to executive management’s need for Twitter.
From where I sit it looks like the 500 million US financial records lost to hackers over the past 12 months come down mainly to executive ego. All these companies opened a door to the Internet so employees could do banking, listen to Internet radio, and check their Gmail -- and in the process all allowed their businesses to be robbed.
So get a 4G phone and leave the corporate network alone. If you must offer Internet, BYOD over a guest network connected locally via DSL.
You can build an IP-to-IP network with low-cost Internet service. The difference is that you remove the default route to the Internet and remove NAT for Internet access. Simply allow static routes that connect only to the other office subnets. Even if bad guys attack the network’s public IP address, the router cannot reply because the route is not in the route table. Without NAT, no user in the RFC 1918 IP subnets can reach anything on the outside anyway. All traffic is routed over the encrypted VPN tunnels. Internet access lives at the hub points -- it is there that you decide if you want to open your network to the world. I vote no.
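To make that concrete, here’s a toy route-table model in Python -- not router firmware, and the subnets and tunnel names are invented -- showing why a network with only static routes to other offices, no default route, and no NAT gives an outside attacker nowhere to go:

```python
import ipaddress

# Destination subnet -> encrypted VPN tunnel to that office. Note what is NOT
# here: no 0.0.0.0/0 default route and no NAT rule.
STATIC_ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "vpn-tunnel-chicago",
    ipaddress.ip_network("10.2.0.0/16"): "vpn-tunnel-atlanta",
    ipaddress.ip_network("10.3.0.0/16"): "vpn-tunnel-denver",
}

def next_hop(dst_ip):
    """Return the tunnel for a destination, or None -- meaning drop the packet."""
    addr = ipaddress.ip_address(dst_ip)
    for subnet, tunnel in STATIC_ROUTES.items():
        if addr in subnet:
            return tunnel
    return None   # no default route, so Internet-bound traffic dies right here

print(next_hop("10.2.44.7"))     # 'vpn-tunnel-atlanta' -- inter-office traffic flows
print(next_hop("203.0.113.9"))   # None -- an attacker's server is simply unreachable
```

The whole trick is in what the table doesn’t contain: with no default route and no NAT rule, anything that isn’t another office subnet simply gets dropped.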
Yet these companies didn’t take the relatively simple steps needed to secure their data. Your company probably hasn’t, either.
Now folks at Google and Yahoo and other outfits that actually require the Internet to do business might see this somewhat differently. For that matter, I like paying my electric bill online and I’m sure the power company doesn’t mind getting money that way. So it’s not entirely simple. But what we’ve done is assume VPNs or https can handle everything when that’s just not true.
We need better rules about how to segregate traffic and design safer networks. And even faced with executive tantrums, IT has to be (re)empowered to just say no.
Ethernet inventor Bob Metcalfe, when I worked for him 20 years ago, taught me that we tend to over-estimate change in the short term and under-estimate it in the long term. So it can be pretty obvious what is coming but not at all obvious when. And what we know about the when of it is that making money from new technologies is often a matter of investing right before that bend upward in the hockey stick of exponential change.
We all know television is bound to enter a new era sooner or later. Heck, I’ve written dozens of columns on the subject over my 17 years in this job. But this is the first time I feel confident in saying when this TV transition will take place. It already has. Forces are already in motion that will completely transform TV over the next 24 months. Come back two years from today and it will all be different with at least a few new leaders and a few icons gone bust. Get ready for TV 3.0.
TV 1.0, starting in the 1940s, was over-the-air television. If you lived in the US there were generally three channels but that was enough to get us all to buy a TV and waste several hours per day watching it. Walter Cronkite was the God of broadcast TV.
TV 2.0 was cable, which relied on the installed base of TVs created by broadcast but expanded the programming to dozens and then hundreds of specialized channels piped in through a new terrestrial network. Cable could never have happened without broadcast happening first. Today in the US local broadcast channels still exist (now generally 5-10 per major media market) but 85 percent of us receive even local broadcast channels through a cable or satellite connection. What keeps local broadcasters broadcasting, rather than becoming just local cable channels, is the value of the radio spectrum they own, which they’ll lose if they don’t continue to use it (more on this below). TV 2.0 deepened the markets for both content and advertising by allowing, for example, more adult channels as well as very localized (and significantly cheaper) commercials. HBO and CNN were the twin Gods of cable TV.
TV 3.0 is so-called Over-the-Top (OTT) television -- streaming video services that rely on cheap Internet service. Pioneers in this field were Hulu, Netflix, and YouTube, each providing a different kind of service and using a different business model. This is continuing the trend of each TV generation building on the installed base of the one before. OTT television requires the extensive wired broadband networks built by the phone and cable companies.
There’s an irony here -- cable Internet is the seed of cable TV’s own demise -- but it makes some sense if you look at cable TV economics, which generally has three sources of revenue: TV, telephone, and Internet. Cable TV is at best a break-even business with most subscriber revenue passed back to content providers like HBO and even local broadcast channels. Cable telephone service was a profit center for many years, undercutting local phone companies, but the rise of OTT Internet phone services like Vonage and, especially, the broad adoption of mobile phones with flat-rate long distance has taken most of the profit out of telephony for cable companies, too. That leaves Internet as the only major profit center for cable companies today. They make most of their money from Internet service because, unlike TV, it carries no content costs.
If cable companies were more pragmatic and gutsier, they’d shed their TV and phone services entirely and become strictly Internet Service Providers. Indeed, this is the generally accepted future for cable TV. That’s the what, but as always there’s that very problematic when. No cable company has been willing to take the risk of being the first to pull those plugs, because the first one or two are generally expected to fail.
So instead the cable companies have been clinging to a hope that they’ll find ways to extend more profitable aspects of their TV business model to the Internet. This is where the fight over Net Neutrality comes in. Cable and phone companies want to sell fast lane access to OTT networks, boosting cable ISP profits. OTT video providers, in turn, want to not pay such carriage fees while consumers don’t want to pay more and are also wary of anything that smacks of Internet censorship.
The Net Neutrality debate would seem to be a check on the progression of TV 3.0, but recent events suggest that the trend is only accelerating. First, the Obama Administration has finally come down hard on the side of Net Neutrality, snubbing the FCC’s current deliberations in the process. Second, there is a wildly successful FCC spectrum auction underway right now for control of radio frequency to be used for expanded wireless data services. Third, there are suddenly a lot of new OTT video services from CBS, HBO, Sony and others. These three trends suggest to me that the TV 3.0 dam is about to break and we’ll see a grand bargain cut between the parties fighting over their use of the Net.
The Administration is demanding that Internet service be put under Title II of the Communications Act, making it a regulated service. That scares the ISPs. So does all this extra supply of wireless broadband spectrum. The cable companies are suddenly at risk of being unseated by new wireless services just the way Wi-Fi is supplanting wired Ethernet. Verizon is preparing to do this right now with technology bought from Intel. And at some point TV broadcast license holders are likely to give up their spectrum, too, for wireless Internet service, making tens of billions of dollars in the process.
My guess is that the Internet stays unregulated while still mandating open and fair access to consumers and backbone connections. With that issue out of the way, TV 3.0 can move forward. And so we have HBO, CBS and Sony leading a new class of OTT services, each trying to be early enough to benefit from the coming wave, yet late enough not to be a casualty of technology change.
In response to these new networks, rumors abound that existing OTT players are going to adapt or expand their business models to better compete in the changing market. Amazon is rumored to have an ad-supported streaming video service in the works to give the online retailer more production money to compete in what is becoming a fight between studios. Amazon denies that it is working on such a service but that doesn’t mean it won’t appear.
Netflix swears it will never show ads but I’ve spoken to two people who were being recruited to help build a Netflix ad network. While some Netflix subscribers might view pre-roll ads as a betrayal, I think it bodes well for its ability to spend on even more original programming in a very competitive market for higher-quality content and especially exclusive content such as House of Cards or Orange is the New Black. That costs money, lots of money.
All of these TV initiatives will hit the US market in 2015. If the OTT networks are successful they’ll accelerate the dissolution of cable TV into Internet-based a la carte video services that should both broaden and deepen available programming. And this is key: this transition is inevitable, and while there will be losers there will be far more winners and a generally positive impact on the US economy.
It’s no longer a matter of if TV 3.0 is coming, but when.
Given IBM’s earnings miss last week and the impact it had on company shares, I thought that rather than just criticizing the company it might make better sense to consolidate my ideas for how to fix IBM. Here they are.
Early in his tenure as CEO, Sam Palmisano made changes that created IBM’s problems today. IBM customers are buying fewer products and services. Revenue has dropped for ten straight quarters. Sam’s changes alienated IBM customers, many of whom are ending what has in many cases been a multi-decade relationship. No amount of earnings promises, no amount of financial engineering, will fix this problem.
IBM forgot the most important part of running a business. While shareholder value is important, it is customers that make business possible.
Under Sam’s leadership IBM began to cut quality, cut corners, and under-deliver on its commitments. IBM squeezed every penny out of every deal without regard to the impact it would have on customers. And those customers have paid dearly. The reputation IBM earned over a century was ruined in a few short years.
What Ginni Rometty and IBM need to do now is simple: Stop doing the things that are damaging IBM. Go back to customers being a corporate priority. They are after all the folks who generate IBM’s revenue.
The Services Problem -- IBM needs to fix Global Services, the company’s largest division touching the most customers and a catalyst for IBM sales. For IBM to succeed it needs a strong and effective services organization. IBM’s Cloud strategy, for example, cannot be financially successful without Services. If Services fail, IBM will fail. It is that simple.
IBM’s Global Services have seen the worst cost cuts and the most layoffs. These cuts have hurt IBM customers. Many contracts have been canceled and sales lost. IBM is no longer considered to be a trusted supplier by many of its customers.
To fix this, IBM needs to invest in Global Services and in its people. Yes, there are already quality improvement programs and automation projects, but these efforts are new, few, and small. Their focus is at the account level, not the organization. Most of the problems are at the leadership level, where profit has been the only priority. Global Services is in crisis and IBM needs to get serious about fixing this organization.
IBM is hemorrhaging talent on a global scale across all divisions. It cannot retain good people. IBMers, as they call themselves, are underpaid, neglected, and have been abused for years. Most of IBM’s 400,000+ employees may still draw paychecks, but they are no longer really working for the company. Their jobs have become nightmares. They are prevented from doing good work. They know IBM is neglecting its customers, but they are powerless to do anything. The best they can do is try to survive until reason returns to IBM’s leadership, if it ever does.
Every IBM staff cut now has a direct impact on revenue. After the 1Q 2014 earnings miss IBM hit its sales support teams hard with layoffs, making it immediately much harder for IBM to sell products and services. Customers became frustrated and shopped elsewhere. In 3Q 2014 revenue took a big fall as a direct result of this earlier bonehead move. Formerly growing lines of business in IBM are now declining. After 10 years of continuous layoffs, any subsequent reduction has a direct and immediate impact on business. IBM can no longer afford to cut staff.
IBM needs to stop staff cuts and start doing the things needed to retain its good workers, including paying them better.
Global Services is horribly inefficient. There is very little automation. The business information systems are poor. IBM has too many people managing accounts and too few servicing them. Global Services is in serious need of a business process redesign and better information systems. For the last 10 years all IBM has done is replace skilled American labor with cheap offshore labor. IBM’s workforce of 400,000 workers looks impressive, but ask IBM how many of them have minimal education and have worked for IBM for less than three years. That huge workforce is now the hollow shell of a once great company.
If IBM can invest $1.2 billion in the Cloud, why can’t it invest $200 million in Global Services? A wise investment could cut in half the number of people needed to manage IBM’s accounts. It could allow IBM support teams to operate proactively instead of reactively. The client experience would be greatly improved -- fewer problems, things running better. If IBM’s Services customers were happy, business retention would be better and more products and services could be sold. This division could again be the business catalyst of the corporation. It is time to manage it better and make modest investments in it.
To look at this another way, if IBM continues to neglect Global Services and does not invest in it, then those billions of dollars in investments in Cloud and Analytics will be wasted. If IBM is to grow again, Services must again become IBM’s most important business. As goes Services, so goes the whole corporation.
The Cloud Problem -- Cloud computing is one of IBM’s gambles for future prosperity. Cloud means different things to different people but what is important for IBM is to understand the business reasons behind the Cloud. It is part of an evolutionary process to reduce the cost of computing. This means less expensive computing for customers and lower profit margins for IBM. It means reduced hardware sales. It implies there will be reduced support costs from Services, too. This is the opposite of what IBM is now telling itself about the Cloud.
For IBM to be profitable in Cloud computing it needs to provide value-added services with its Cloud platform. Most Cloud offerings are Platform as a Service (PaaS). There isn’t enough profit in PaaS for IBM to get a good return on its multi-billion dollar investment. IBM needs to provide additional things with its Cloud service -- Services and Software as a Service (SaaS). To provide Cloud SaaS IBM needs to have software applications that the market needs. It doesn’t. The biggest market for Cloud SaaS is not IBM’s huge legacy customers; it is the other 80 percent of the market, the not-so-big companies that IBM has served poorly (if at all) in recent years. They will want something that is cost effective and "just works". IBM does not have in its product portfolio the business applications these customers need. In this area IBM is dangerously behind and faces stiff competition from firms like Amazon, Microsoft, Google -- even Oracle. IBM urgently needs to invest in the software its next generation of customers will want to use.
The Software Problem -- IBM’s software business was one of its brighter stars in 2013. It enjoyed sales growth and good profit margins. The problem is IBM’s software business is far from where it needs to be. To understand IBM’s situation in software, look at Oracle. For years Oracle was a database company. Today Oracle is much more than a database company. It has developed and acquired a portfolio of business applications. If you want an HR system or an accounting system you can find it at Oracle. When it comes to software IBM is still very much in the 1970s. It sells the tools its customers need to write their own business applications. If you have a business and want to purchase finished software you can use to run your business, IBM will probably not be your first choice. While IBM’s software division has been growing nicely, its long-term potential is limited because it is not aligned to the needs of the market.
IBM’s current management approach has crushed the life from most of the software companies it has purchased. Software is not a business you can carve into pieces and scatter all over the world. Software works best when there is a short and tight communications link between the customer and a dedicated product development team. Product development needs to understand the needs and directions of the customers, it needs to be empowered to design new products and versions that will increase its value to the market, and it needs to be enabled to produce those products and versions quickly and efficiently.
IBM’s announcement last week that it will run SAP HANA is a step in the right direction, though I think SAP will make more money on the deal than IBM.
Here’s the key: every company in existence needs an accounting program, order processing, inventory management, distribution, and other types of "run your business" applications. All of IBM’s big customers already have the applications they need. They’ve had them for 25 years. It is all the smaller companies that could use better and cheaper applications. These organizations make up 90 percent of the IT market and are not served well -- if at all -- by IBM. This is where the big money in Cloud is.
IBM used to have a lot of "run your business" software. Since the demise of the old General Systems division in a political bloodbath a lot of this software has faded out of existence. This is a huge problem for IBM.
Oracle bought PeopleSoft and got a very good accounting system and a very good HR system. It has bought other companies that sell "run your business" software. Computer Associates (CA) has bought many of these types of businesses too. If you want to buy an accounting package, I don’t think IBM would have anything to sell you. It probably doesn’t have anything it could put in the Cloud, either. But Intuit (QuickBooks) has an accounting package and figured out it can make more money by selling it as a service. So did Salesforce.com. These companies are years ahead of IBM.
For Cloud to become a big money maker for IBM, IBM needs to buy applications -- big time. Maybe it should buy Intuit, Salesforce, etc. Could it afford to buy CA? Can it afford not to buy CA?
Software as a Service (SaaS) is critical for IBM’s Cloud to be financially successful. Unfortunately today IBM does not have software that customers want to use or need for their business. IBM needs to be a lot smarter about its software investments and completely change how it manages this business.
The Mobile Problem -- IBM invented the first smart phone (the Simon) in 1993. Today IBM is completely non-existent in the mobile market. Apple and Google are the leaders; Microsoft has been working very hard and making enormous investments to get a foothold in this market. That said, Microsoft is light-years ahead of IBM. IBM has completely missed the biggest change in Information Technology in a decade. This should speak volumes about the leadership at IBM and why it needs a large scale change in management.
IBM cannot buy its way into the mobile market. If it isn’t working for Microsoft, it won’t work for IBM. Then again IBM does not have to make big acquisitions to become a big player. IBM needs to think differently. IBM should start by looking at the App Stores of Apple and Google. There IBM will find tens of thousands of applications, most of them written by individuals and small companies. This can be an archetype for a whole new IBM behavior -- creativity. IBM needs its vast workforce to come up with ideas, act on them, and produce mobile applications. IBM should have its own App Store. This will give IBM a way to learn how to use the new mobile platforms. It will provide a way for the application developers to interact with IBM’s customers. Over time IBM will learn and develop mobile technology that is useful to IBM’s customers. This is a market where seeing and using a live application is much better than marketing copy in a sales presentation.
IBM should be partnering with Apple, Google, and yes -- Microsoft. There should be no favorites. IBM already has a mobile deal of sorts with Apple but it is key to understand that it has so far resulted in a total head count increase in Cupertino of two workers, which shows what Apple thinks of IBM. Apple is not enough.
IBM should license development tools for every mobile platform. These tools should be made available to any employee with an interest in developing a mobile application. IBM should make it easy for employees to get mobile devices, especially tablets. IBM should provide internal infrastructure -- servers, applications, etc. with which to develop and demonstrate mobile computing. The better IBM understands mobile technology, the sooner and better IBM can support its customers. There is a place in the mobile market for IBM. It must make up for lost time and become everyone’s trusted partner.
The Quality Problem -- The best definition of quality is "delighting the customer". Quality means being able to do the same thing tomorrow, better, faster, and cheaper. Quality is continuous improvement. It is possible to improve quality and at the same time reduce labor and costs. Companies that have mastered this skill went on to dominate their markets. Quality is a culture, an obsession. It must start from the top and involve everyone -- IBM’s executives, all levels of management, employees, suppliers, even customers.
Sometime soon one of IBM’s competitors will implement a serious Continuous Quality Improvement program. When that happens, IBM will be toast. History has shown that when a company trashes its quality, neglects its customers, and makes earnings its only priority -- bad things happen. Over the last 50 years, the USA has lost many industries this way. If IBM does not get serious about quality its survival will be at risk.
The Respect Problem -- Today in IBM "Respect for the individual" is dead. So is "Superlative customer service". Every decision made by IBM for the last 10 years has been to find ways to spend nothing, do as little as possible, and get to $20 EPS. IBM’s workforce is operating in survival mode. They have no voice, no means to make IBM better, and they are certainly not going to stick their necks out. IBM is squandering its greatest resource and most of its best minds. Most of IBM’s businesses are declining. As business declines IBM cuts staff. Quality and services get worse and business declines even more. Execution gets worse. Every day customers trust and respect IBM less. They buy less. IBM needs to break this cycle of insanity. It needs to start treating its employees better and mobilize them to save the company.
The Leadership Problem -- IBM has no vision, none, nada, zip. CEO Ginni Rometty and her cadre have no clue how to fix what’s wrong with IBM. And even if they did, they are too tainted by the current state of the company -- a state they created. IBM executives are for the most part in a state of paralysis. They don’t know what to do. They know their business has serious problems. Even if they knew what to do they’d be afraid to act. Ginni and the $20 EPS target have kept most of the senior executives frozen from doing anything. This type of management style can be fatal for a business. The CEO should be helping each division become more successful. Because $20 EPS has been the only goal, IBM’s senior leaders have become unable to manage their businesses.
Here’s why current management can’t do the job. If the current VP-level managers at IBM (the only level, by the way, that’s allowed to even see the budget) take action and spend some of the profit to fix their businesses, they’ll be in hot water with Ginni Rometty and IBM’s culture of blame simply for trying to do something. If they take action and the fix doesn’t work, they’ll be punished for trying. So the only obvious option current management seems to have is to stand by and watch Ginni and the finance department kill their divisions. Just watch: Ginni’s plan to save the company will involve further cuts, and you can’t cut your way to prosperity.
IBM has been here before. Back in 1993, under CEO John Akers, the company cratered, booking an $8 billion loss on $40 billion in sales. That’s when, for the first time in its history, IBM turned to an outsider to be CEO. They should do this again -- find another Gerstner, ideally a better Gerstner, because he had a share in creating the current crisis, too.
Ginni Rometty has to go and with her much of IBM’s board.
The Bottom Line -- IBM has never been the low cost provider of anything, yet a company of IBM’s size and talent should be able to be the undisputed lowest cost, highest volume supplier in the industry. IBM’s leaders are mostly from the sales organizations. Those with expert operational and leadership skills don’t go far in IBM. A new way of thinking is needed in every corner of IBM. Every line of business should be asking itself "how can we become the best, cheapest, and biggest supplier?" Every line of business should have well reasoned plans, funding to act on those plans, and a green light to proceed.
Thankfully IBM gave up on its 2015 goal of $20 EPS. Unfortunately IBM still plans to continue cutting staff to reach prosperity, which is insane. By now it should be painfully obvious this is destructive to the company. IBM needs to step back and be honest with itself and its shareholders. It needs to set reasonable budgets and financial expectations. It needs to spend more on its people and on improvements. IBM needs to regroup and repair the company. For the next three to five years IBM should plan on turning in lower, but still good profits. If it does this by 2020 it could again be a business juggernaut.
This week, of all weeks, with IBM seemingly melting down, you’d think I’d be writing about it, and I have been -- just not here. You can read two columns on IBM I published over at forbes.com, here and here. They are first-day and second-day analyses of IBM’s earnings announcement and the sale of its chip division to GlobalFoundries. I could publish them here three days from now but by then nobody will care, so instead I’ll just give you the links.
One thing I can do here is consider the way IBM CEO Ginni Rometty is spinning this story. She was all over the news on Monday repudiating the 2015 earnings target set by her predecessor Sam Palmisano and more or less claiming to be a victim -- along with the rest of IBM -- of Sam’s bad management. Well she isn’t a victim. Ginni was an active participant in developing the Death March 2015 strategy. And as CEO -- now CEO and chairman -- it’s laughable to contend, as Ginni apparently does, that she has been somehow bound by Sam’s bad plan.
When you get the big job the whole idea is that you start calling your own shots.
Ginni Rometty is in trouble along with the rest of IBM. She’s leading a failed strategy laid out in gruesome detail in my IBM book (link -- buy a bunch, please) and appears to have little or no idea what to do next. Having finally repudiated that crazy 2015 earnings target, the market will allow Ginni to reinvest some of that money in trying to save IBM’s core businesses. But it isn’t at all clear she knows where to invest or even why. If that’s true, Ginni Rometty won’t survive much longer.
A bold turnaround plan is coming from Ginni, we’re told. And I’ll be coming up with my own plan, too, which will be interesting to compare and contrast. Look for that here at the end of the week.
Some readers see this as pointless. They don’t care about IBM and don’t want to be bothered reading about it. Well I’m sorry, but I have a sentimental connection with the company. And that’s not to mention all the IBM customers, retirees, and current employees who wonder what the heck is actually going on.
As Churchill said of the USSR, IBM has been a riddle wrapped in a mystery inside an enigma. But maybe not for much longer.
Two weeks ago IBM told the IT world it was taking on Intel in the battle for server chips with new Power8 processors incorporating advanced interconnection and GPU technology from NVIDIA. This followed an announcement earlier in the year that Google was using Power8 processors in some of its homemade servers. All this bodes well for IBM’s chip unit, right?
Not so fast.
Some product announcements are more real than others. While it’s true that IBM announced the imminent availability of its first servers equipped with optional Graphics Processing Units (GPUs), most of the other products announced are up to two years in the future. The real sizzle here is the NVLink and CAPI stuff that won’t really ship until 2016. So IBM is marketing a 2016 line of Power8 vaporware against a 2014 Intel spec. No wonder it looks so good!
This new technology won’t contribute to earnings until 2016, if then.
Worse still, the Power8 chips rely on a manufacturing process that’s behind Intel from a division IBM was trying to pay to give away (without success) only a few weeks before. And that Google announcement of Power8 servers didn’t actually say the search giant would be building the boxes in large numbers, just that it was testing them. Google could be doing that just to get better pricing from Intel.
This is IBM marketingspeak making the best of a bad situation. It’s clever leveraging of technology that is otherwise becoming rapidly obsolete. If IBM wants Power to remain competitive it’ll have to either spend billions to upgrade its Fishkill, NY chip foundry or take the chips to someplace like TSMC which will require major reengineering because TSMC can’t support IBM’s unique -- and orphaned -- copper and silicon-on-insulator (SOI) technology.
IBM is right that GPUs will play an important part in cloud computing as cloud vendors add that capability to their server farms. First to do so was Amazon Web Services with its so-called Graphical Cloud. Other vendors have followed with the eventual goal that we’ll be able to play advanced video games and use graphically-intensive applications like Adobe Photoshop purely from the cloud on any device we like. It’s perfectly logical for IBM to want to play a role in this transition, but so far those existing graphical clouds all use Intel and Intel-compatible servers. IBM is out of the Intel server business, having just sold that division to Lenovo in China for $2.1 billion.
Key to understanding how little there is to this announcement is IBM’s claim that the new servers it is shipping at the end of this month have a price-performance rating up to 20 percent higher than competing Intel-based servers. While 20 percent is not to be sneezed at, it probably isn’t enough to justify switching vendors or platforms for any big customer. If the advantage were, say, double that, it might be significant, but 20 percent -- especially 20 percent that isn’t really explained, since IBM hasn’t revealed prices yet -- is just noise.
Those who adopt these new IBM servers will have to switch to Ubuntu Linux, for example. Industry stalwart Red Hat doesn’t yet support the platform, nor does IBM’s own AIX. "Switch vendors, switch operating systems, and maybe gain 20 percent" won’t cut it with most IT customers.
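Here’s a back-of-the-envelope way to see why. All the numbers below are my own assumptions, not IBM’s or anyone’s published figures, but they show how quickly a 20 percent hardware edge disappears once migration costs enter the picture:

```python
# All numbers are assumptions for illustration, not published figures.
hardware_share_of_tco = 0.30    # assume servers are ~30% of a three-year total cost of ownership
price_performance_gain = 0.20   # IBM's claimed edge, treated as ~20% off the hardware bill
migration_penalty = 0.15        # assume a one-time OS/app migration costs ~15% of TCO

hardware_savings = hardware_share_of_tco * price_performance_gain   # 6% of TCO
net_change = migration_penalty - hardware_savings                   # +9% of TCO
print(f"Hardware savings: {hardware_savings:.0%} of TCO")
print(f"Net effect of switching: adds {net_change:.0%} to TCO")
```

Plug in your own numbers if you like; unless the performance edge is dramatic, the migration cost swamps it.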
And we’ve been here before.
From Wikipedia: "The PowerPC Reference Platform (PReP) was a standard system architecture for PowerPC based computer systems developed at the same time as the PowerPC processor architecture. Published by IBM in 1994, it allowed hardware vendors to build a machine that could run various operating systems, including Windows NT, OS/2, Solaris, Taligent and AIX".
PReP and its associated OSF/1 operating system were an earlier IBM attempt to compete with and pressure Intel. Intel kept improving its products and made some pricing adjustments, destroying PReP. Linux basically killed OSF/1 and every other form of Unix, too. OSF/1 came out in 1992. PReP came out in 1994. By 1998 both were forgotten and are now footnotes in Wikipedia.
IBM’s OpenPower is likely to suffer the same fate.
Back in the 1980s, when I was the networking editor at InfoWorld, one of my jobs was to write profiles of corporate networks. One of those profiles was of the Adolph Coors Brewing Company of Golden, Colorado, now known as Molson Coors Brewing. I visited the company’s one brewery at the time, interviewed the head of IT and the top network guy, then asked for a copy of the very impressive network map they had on the wall.
"Sorry, we can’t give you that," they said. "It’s private".
"But we always print a map of the company network," I explained.
"Fine, then make one up".
And so I invented my own map for the Coors network.
There’s a lesson here, trust me.
Back then there was no commercial Internet. The Coors network, like every other corporate computer network, was built from leased data lines connecting the brewery with sales offices and distribution centers in every state except Indiana at the time. Such networks were expensive to build and the people who ran them were quite proud.
Today we just find a local Internet Service Provider (ISP) and connect to the Internet, a much simpler thing. If we want secure communications we build Virtual Private Networks (VPNs) that encrypt the data before sending it across the public Internet and decrypt it at the other end. We do this because it is easy and because it is cheap.
IT costs a lot less today than it used to, and cheap Internet service helps make that possible.
Cheap Internet service also made possible every major corporate security breach, including the big retail hacks and data theft at Target and Home Depot as well as the big JP Morgan Chase hack revealed just last week that compromised information on 76 million households and seven million small businesses.
How cheap is IT, really, if it compromises customer data? Not cheap at all.
Last year’s Target hack alone cost the company more than $1 billion, estimated Forrester Research. The comparably-sized Home Depot hack will probably cost about the same. JP Morgan Chase is likely to face even higher costs.
Here’s the simple truth: it makes no sense, none, nada, for a bank to send financial transactions over the public Internet. It makes no sense for a bank or any other company to build gateways between their private networks and the public Internet. If a company PC connects to both the corporate network and the Internet, then the corporate network is vulnerable.
At Target and Home Depot the point-of-sale (cash register) systems were compromised, and customer data was gathered and sent back to the bad guys via the Internet. Had there been no Internet connection the bad guys could never have received their stolen data.
Taking a bank or retail network back to circa 1989 would go a long way toward ending the current rash of data breaches. It would be expensive, sure, but not as expensive as losing the kind of money Target and the others recently have.
This is the simple answer, yet few companies seem to be doing it. The reason for that, I believe, is that professional IT management in the old sense no longer exists at most companies. And public companies especially are so trained to cut IT costs that they’ll continue to do so even as their outfits lose billions to hackers. Besides, those losses tend to be charged to other divisions, not IT.
Back at Coors they loved that I designed my own incorrect network map because it would be that much harder for an outsider to gain access to their network and steal data. IT people thought about such things even then. Until we re-learn this lesson there will always be network hacks.
Some corporate and government data simply doesn’t belong on the Internet. Why is that so hard to understand?
A son of mine, I’m not saying which one, borrowed from my desk a credit card and -- quick like a bunny -- bought over $200 worth of in-game weapons, tools, etc. for the Steam game platform from steamgames.com, which is owned by Valve Corp. Needless to say, the kid is busted, but the more important point for this column is how easily he, for a time, got away with his crime.
I would have thought that vendors like steamgames.com would not want children to be buying game stuff without the consent of their parents, yet they made it so easy -- too easy.
When I use a credit card to buy something online it seems like they always ask for a billing address or at least a billing zip code, but not at store.steampowered.com. My kid didn’t know the billing address for the credit card because it isn’t our home address and isn’t (or wasn’t, I should say) even in California. It was a business credit card and the business was based in another state.
I asked about this when I checked with the bank to see who had been using my card, and they told me all those security functions like asking for the billing zip code and the security number on the back aren’t required by the bank or the credit card issuer at all -- they are optional checks the merchant chooses to run (or not) to minimize fraud.
In light of this, it only makes sense that steamgames.com wants kids like mine to buy stuff using whatever means they have available. Maybe their parents won’t notice.
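For what it’s worth, the merchant-side check is trivial. Here’s a hypothetical sketch -- the gateway response fields below are invented, though real payment processors return similar AVS and CVV result codes that a merchant is free to act on or ignore:

```python
# A hypothetical sketch of the kind of check a merchant *can* run before
# accepting a card-not-present charge. Field names are made up for illustration.
def should_accept_charge(gateway_response):
    """Reject the charge unless both the billing ZIP (AVS) and the CVV matched."""
    return (gateway_response.get("avs_zip_match", False)
            and gateway_response.get("cvv_match", False))

# A kid guessing with a borrowed card fails the address check:
print(should_accept_charge({"avs_zip_match": False, "cvv_match": True}))    # False
# The actual cardholder passes both:
print(should_accept_charge({"avs_zip_match": True, "cvv_match": True}))     # True
```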
Under US law, since my son is under 18, I can probably call and get the charges reversed. I’ll try that tomorrow. But in a culture where bad guys seem to lurk everywhere trying to steal our identities and worse, it’s pretty disgusting to see a company (Valve Corp.) that doesn’t appear to give a damn.
What do you think?
Alibaba’s IPO has come and gone, and with it Yahoo has lost the role of Alibaba proxy and its shares have begun to slide. Yahoo’s Wall Street honeymoon, if there ever was one, is over, leaving the company trying almost anything it can to avoid sliding into oblivion. Having covered Yahoo continuously since its founding 20 years ago, I think it is clear Y! has little chance of managing its way out of this latest of many crises despite all the associated cash. But -- if it will -- Yahoo could invest its way to even greater success.
Yahoo CEO Marissa Mayer, thinking like Type A CEOs nearly always seem to think, wants to take some of the billions reaped from the Alibaba IPO and dramatically remake her company to compete again with Google, Microsoft, Facebook, and even Apple. It won’t work.
Those ships have, for the most part, already sailed and can never be caught. Yahoo would have to do what it has been trying to do ever since Tim Koogle stepped down as CEO in 2001 and regain its mojo. There is no reason to believe that more money is the answer.
It’s not that Mayer isn’t super-smart, it’s that the job she is attempting to do may be impossible. She has the temperament for it but the rest of Yahoo does not. Even if she fires everyone, Yahoo still has a funny smell.
In practical terms there are only two logical courses of action for Mayer and Yahoo. One is to wind things down and return Yahoo’s value to shareholders in the most efficient fashion, selling divisions, buying back shares, and issuing dividends until finally turning out the lights and going home. That’s an end-game. The only other possible course for Yahoo, in my view, is to turn the company into a Silicon Valley version of Berkshire Hathaway. That’s what I strongly propose.
Mayer seems to be trying to buy her way ahead of the next technology wave, but after a couple of years at this game it isn’t going well. Lots of acqui-hires (buying tech companies for their people) and big acquisitions like Tumblr have not significantly changed the company’s downward trajectory. That’s because that trajectory is determined more by Google and Facebook and by changes in the ad market than by anything Yahoo can do. It’s simply beyond Mayer’s power because no matter how much money she has, Google and Facebook will always have more.
It’s time to try something new.
While Berkshire Hathaway owns some companies outright like Burlington Northern-Santa Fe railroad and GEICO, even those are for the most part left in the hands of managers who came with the businesses. At Coke and IBM, too, Berkshire tends to trust current management while keeping a close eye on the numbers. Yahoo should do the same but limit itself to the tech market or maybe just to Silicon Valley, keeping all investments within 50 miles of Yahoo Intergalactic HQ in Sunnyvale.
Yahoo’s current stakes in Alibaba and Yahoo Japan are worth $36 billion and $8 billion respectively, and Alibaba at least appears to be on an upward trajectory. With $9 billion in cash from the Alibaba IPO Yahoo has at least $50 billion to put to work without borrowing anything. $50 billion is bigger than the biggest venture capital, private equity, or hedge fund.
Mayer is smart, but maybe not smart enough to realize the companies in which she is interested could do better under their own names with a substantial Yahoo minority investment. That would leverage Yahoo’s money and allow a broader array of bets as a hedge, too. Mayer can pick the companies herself or -- even better -- just participate in every Silicon Valley B Round from now on, doing a form of dollar cost averaging that puts $15 billion to work every year. With future exits coming from acquisitions and IPOs (and possibly winding-down its own tech activities) Yahoo ought to be able to fund this level of investment indefinitely. Yahoo would literally own the future of tech.
Silicon Valley companies that make it to a B Round (the third round of funding after seed and A) have dramatically better chances of making successful exits. Yahoo wouldn’t have to pick the companies -- hell, it wouldn’t even have to know the names of those companies, just their industry sectors and locations. Forty years of VC history show that with such a strategy investment success would be practically guaranteed.
As opposed to the company’s current course, which is anything but.
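If you want to see why spray-and-pray at that scale works, here’s a toy Monte Carlo sketch. The return distribution is invented for illustration -- it is not real VC data -- but it captures the basic venture math: most bets go to zero and a few big winners carry the portfolio.

```python
# A toy simulation of backing every Silicon Valley B Round with a fixed budget.
# All rates and multiples below are my own assumptions, not historical returns.
import random

random.seed(42)

def simulate_year(annual_budget=15e9, deals_per_year=300):
    """Spread the budget evenly across every deal and tally the exits."""
    check = annual_budget / deals_per_year        # ~$50 million per company
    total = 0.0
    for _ in range(deals_per_year):
        r = random.random()
        if r < 0.60:
            multiple = 0.0                        # assume 60% of bets go to zero
        elif r < 0.90:
            multiple = random.uniform(0.5, 3.0)   # 30% roughly return the money
        else:
            multiple = random.uniform(3.0, 20.0)  # 10% are the big winners
        total += check * multiple
    return total / annual_budget

print(f"Simulated portfolio multiple: {simulate_year():.2f}x")
```

Run it with your own assumptions; the diversification argument holds as long as the winners’ multiples are big enough to cover all the zeros.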
Right now, depending who you speak with, there is either a shortage or a glut of IT professionals in the USA. Those who maintain there is a shortage tend to say it can only be eliminated by immigration reform allowing more H1-B visas and green cards. Those who see a glut point to high IT unemployment figures and what looks like pervasive age discrimination. If both views are possible -- and I am beginning to see how they could be -- we can start by blaming the Human Resources (HR) departments at big and even medium-sized companies.
HR does the hiring and firing or at least handles the paperwork for hiring and firing. HR hires headhunters to find IT talent or advertises and finds that talent itself. If you are an IT professional in a company of almost any size that has an HR department, go down there sometime and ask about their professional qualifications. What made them qualified to hire you?
You’ll find the departments are predominantly staffed with women, few of whom, if any, have technical degrees. They are hiring predominantly male candidates for positions whose duties they typically don’t understand. Those HR folks, if put on the spot, will point out that the final decision on all technical hires comes from the IT department itself. All HR does is facilitate.
Not really. What HR does is filter. They see finding the very best candidates for every technical position as an important part of their job. But how do you qualify candidates if you don’t know what you are talking about? They use heuristics -- sorting techniques designed to get good candidates without really knowing good from bad.
Common heuristic techniques for hiring IT professionals include looking for graduates of top university programs and for people currently working in similar positions at comparable companies including competitors. The flip side of these techniques also applies -- not looking for graduates of less prestigious universities or the unemployed.
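To make that filtering concrete, here is a deliberately crude sketch of the heuristic in code. The candidates, the school list, and the rule are all invented for illustration; no real HR system is being quoted:

```python
# A caricature of the HR screening heuristic described above.
# All names, schools, and rules are hypothetical -- the point is only to show
# how credential-and-employment filters discard candidates sight unseen.

TOP_SCHOOLS = {"Stanford", "MIT", "Carnegie Mellon", "Berkeley"}

candidates = [
    {"name": "A", "school": "Stanford",             "employed": True,  "actual_skill": 6},
    {"name": "B", "school": "University of Toledo", "employed": False, "actual_skill": 10},
    {"name": "C", "school": "MIT",                  "employed": True,  "actual_skill": 5},
]

def passes_hr_filter(c):
    """The heuristic: prestigious degree AND currently employed."""
    return c["school"] in TOP_SCHOOLS and c["employed"]

shortlist = [c for c in candidates if passes_hr_filter(c)]
print([c["name"] for c in shortlist])   # ['A', 'C']
# Candidate B, the strongest programmer in the pool, never reaches
# the IT department's interview list. That is the filtering problem.
```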
The best programmer I know is Paul Tyma, 2014 Alumnus of the Year of the College of Engineering at the University of Toledo in Ohio. Paul later got a PhD from Syracuse University and that is what scored him an interview at Google where he became a senior developer, but it’s doubtful that would have happened had he settled for the U of T degree where he learned most of his chops.
It’s very common for the best programmer in any department to have a low quality degree or sometimes no degree at all. This person, this absolutely invaluable person, would generally not make the HR cut for hiring at their company today. Those interviewers from the IT department would never know they existed.
Same for the unemployed. Layoffs are deadly for IT reemployment. If you don’t know who to interview it’s easier just to decide you’ll only talk with people who are already working somewhere. A bad employed programmer is viewed as inherently superior to a very good unemployed programmer. This of course eliminates from consideration anyone who was laid-off for any reason. Speaking as a guy who was fired from every job I ever had (you’d fire me, too -- believe me) if I was trying to find a technical job today I’d probably never work again.
It doesn’t matter why you lost your job. The company moved and you couldn’t move with it for some family reason. Your startup failed. Your boss was an asshole. You were an asshole, but a brilliant one. You were older and dumped (illegally I might add) to save money. It doesn’t matter how smart or skilled you are if HR won’t even put your name on the interview list.
One way around this is to go back to school the moment you are fired or laid off. When you graduate with that new degree or certificate you’ll be desirable again -- in debt, but desirable.
And so we have the appearance of IT labor shortages at the same time we have record IT unemployment. And because the head of HR isn’t going to admit to the CEO that such bonehead policies exist, they are kept secret and the CEO is urged to lobby for immigration reform.
Headhunters don’t help, either, because they see the source of their hefty commissions as luring working programmers from one company to another. Unemployed programmers don’t need luring and so don’t need headhunters.
There are exceptions to these trends, of course, but they are rare.
Those ladies down in HR are typically damaging their companies while simultaneously working very hard trying to do what they believe is good work. It’s a paradox, I know, and one that’s for the most part unknown by the rest of society.
The answer, of course, is to either improve the quality of HR departments, making them truly useful, or make them dramatically less powerful, maybe eliminating them entirely from hiring.
I’d recommend doing both.
Photo Credit: zwola fasola/Shutterstock
As we all know, Apple last week announced two new iPhones, a payment service (Apple Pay), and a line of Apple Watches that require iPhones to work. There’s not much I can say about these products that you can’t read somewhere else. They are bigger and better than what preceded them and -- in the case of Apple Pay and the Apple Watch -- just different. They are all topnotch products that will stand out in the market and have good chances of being successful. So instead of writing about products we already know about, I’d like to write about moats to protect products from competition.
Moats, as you know, are defensive fortifications typically built to surround castles, making them harder to storm. In order to even get to the castle, first you have to get past the moat which might be filled with water and that water might, in turn, be covered with burning oil.
Moats are important in business because Berkshire-Hathaway CEO Warren Buffett likes moats. He likes the businesses in which he invests to have large and defensible product or service franchises with those defenses characterized as moats. Buffett’s the guy who coined the term, in fact.
Warren Buffett and his moats have also lately been a real pain in my ass.
I have a book out, you see, called The Decline and Fall of IBM, and the easiest way to criticize my book is to ask, "If you are right, Bob, and IBM is in big trouble, why does Warren Buffett (the numero uno investor in the whole danged world) have so much money invested in the company?" Since this is a column about Apple, not IBM, I’ll just say that Warren loves moats, Apple doesn’t believe in them, and -- for the kinds of businesses Apple and IBM are in -- Apple is right and Warren is wrong.
It’s easy to criticize Apple on moats because from the outside looking in it appears that Apple had the lead in PCs and lost it, had the lead in laser printers and lost it, had the lead in graphical workstations and lost it, had the lead in music players and saw it erode, had the lead in smart phones and lost it, had the lead in tablets and saw it erode, had the lead in video and music downloads and saw those erode, too. Apple appears to invent businesses then either lose them or watch them fade away. No moats.
If Apple had moats it would fight for market share based on price, shaving margins. If Apple had moats it would stop inventing new product categories because it would no longer need to invent new product categories. If Apple had moats, we’re told, it would make more money.
But wait a moment, isn’t Apple already among the ten most profitable companies on Earth? How much more money does it need to make to be considered successful? If profits are the measure of business success, it’s hard to see Apple as anything but the biggest technology business success of the 21st century.
Apple makes more money than any high tech company with a moat ever did including Microsoft, Intel, and IBM. So why have a moat?
A moat is an insurance policy. Moats are attractive to people who don’t have direct influence on the operations of the companies in which they invest. If Apple had moats, the idea goes, Apple could remain hugely profitable for years to come even with the loss of 50 corporate IQ points.
Companies with moats are supposed to be better prepared for enduring stupid mistakes or unforeseen market shifts.
I know Apple fairly well and it doesn't like moats because it doesn't like playing defense. Instead of being able to withstand any assault, Apple sees itself as more of an Agile Fighting Force capable of taking the battle to any competitor in any market at any time. Apple doesn’t want to survive the loss of 50 corporate IQ points because a dumbed-down Apple wouldn’t be Apple. Been there, done that back in the 1990s under Sculley, Spindler, and Amelio.
Apple would rather die than go back to those days.
And so it counts on its products succeeding on merit, not low prices. It likes that old product categories die and new ones replace them. It likes not having moats because moats reward only sloppy behavior.
It likes products and customers more than defensible franchises.
If Apple had built a moat around the Apple II would there have been a Macintosh? If it had built a moat around the Macintosh would there have been an iPod, an iPhone, or an iPad?
Would there have been an Apple Watch?
Moats are for dummies.
Back to you, Warren.
Who owns your telephone number? According to Section 251(b) of the Communications Act of 1934, you own your number and can move it to the carrier of your choice. But who owns your texting phone number? It’s the same number, just used for a different purpose. The law says nothing about texting, so the major wireless carriers (AT&T, Sprint, T-Mobile, and Verizon) are claiming that number is theirs, not yours, even if you are the one paying a little extra for unlimited texting. And the way they see it, unlimited clearly has limits: carriers and texting services outside the Big Four will soon be expected to pay cash to reach you.
Those who’ll pay to text you include mobile carriers outside the Big Four, led by the largest independent, US Cellular, as well as so-called over-the-top texting service providers that presently offer free texting services. These companies include pinger.com, textplus.com, textnow.com, textme.com, and heywire.com. Service continues for now but the incumbents are threatening to shut it down any day. T-Mobile started trying to impose fees several weeks ago and Sprint, I’m told, will start trying to charge next week.
Verizon shut off texting access to its network for two weeks starting April 3 as a shot across the bow of the over-the-top (OTT) carriers. Texting suddenly stopped working for OTT users, supposedly to limit spam texts. It was quickly documented that 98 percent of SMS spam was coming from AT&T and T-Mobile SIM card fraud, which was not affected by the OTT cut-off, so the Verizon switch was turned back on, though the OTT carriers were now on warning.
This is tied, by the way, to the emergence of a new business -- 800 texting. Want to report a problem to your cable company or check your bank balance? Send them a text. This was supposed to become a big business but now may not even start because the providers who make it possible are all OTT. The incumbent carriers don’t enable 800 texting because they don’t have the technology. This is their way of getting a piece of this new business.
For that matter, the carriers also didn’t have the ability to do billing for this type of service so they asked the two large US SMS aggregators (SAP & Syniverse) to track it on their behalf, which they have reportedly done. Now it’s just a matter of pulling the trigger.
Texting used to mean big bucks to mobile carriers back when it was charged a la carte and the fathers of teenage girls were getting bills for hundreds -- sometimes thousands -- of dollars per month. Unlimited texting and family plans have changed that for most of us. But the carriers miss that income, and this gambit for control of our texting numbers may be an attempt to regain some of that old revenue, and some of the influence that has shifted in the past few years to non-SMS messaging apps such as Snapchat (raising money at huge reported valuations), Viber (acquired earlier this year for $900 million by Rakuten), and WhatsApp (acquired for $19 billion by Facebook). To the extent these services tie into SMS networks, they may feel some of the carriers’ controlling effect.
Apple can do texting with its iMessage, but that’s Apple and is dependent on its ownership of the operating system and the great deal it cut with carriers back in 2007. Google and Android don’t have a deal like Apple’s. So iOS comes out of this looking good.
The point of pain here is interoperability -- where OTT networks try to interface with legacy text networks that aren’t OTT. The FCC, which nominally regulates mobile phones, is nowhere to be found in this story because texting is considered neither a voice nor a data service under the Communications Act. Texting is signaling, which used to mean just setting up and disconnecting calls but could include a short message. Unregulated, the carriers can charge what they like, which looks to be 2-3 cents per text. But to add another first, the big carriers are aiming to charge for both sending and receiving. Remember that even under the old regulated phone system it didn’t cost money to receive a call. So a transaction with your cable company involving a text and a response will incur a total of four charges -- two transmissions and two receptions -- costing someone up to 12 cents.
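To put numbers on that transaction, here is the arithmetic using the 2-3 cents per text cited above and the four charges just described. The rates are not final, so treat the totals as illustrative:

```python
# Cost of one text "transaction" with your cable company under the proposed
# scheme: your text out plus their reply back, each charged on send AND receive.
# The 2-3 cent per-text rates come from the column; nothing here is final.

def transaction_cost(rate_per_text):
    messages = 2              # your text plus the company's reply
    charges_per_message = 2   # one charge to send, one to receive
    return messages * charges_per_message * rate_per_text

for rate in (0.02, 0.03):
    cents = transaction_cost(rate) * 100
    print(f"At {rate * 100:.0f} cents per text: {cents:.0f} cents per transaction")
# At 2 cents per text: 8 cents per transaction
# At 3 cents per text: 12 cents per transaction
```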
This is just plain-text we are talking about. Picture texting is another service, also unregulated by the FCC, and subject to its own set of carrier negotiations. If plain-text costs 3-12 cents, picture texting will cost more. In picture texting, too, Apple’s iMessage has an advantage.
OTT texting last year amounted to just under 100 billion messages, so the big carriers are looking for up to $10 billion in new revenue (on top of $21 billion in existing text revenue). Internet startups will die or be sold if these changes go through, and presumably the carriers will pick up those businesses. This does not bode well for innovation, though it somewhat bolsters text and chat services like Snapchat that don’t even attempt to deliver through the mobile phone’s native texting.
It is unclear how this story will play out. The FCC can do nothing unless someone complains, and until you read this column nobody even knew to complain. So decide how you feel about it.
Photo Credit: iofoto/Shutterstock
Did you ever see the 1991 Albert Brooks movie Defending Your Life? A movie that clearly could not be made today because it includes neither superheroes nor special effects and isn’t a sequel, it’s about a schmo (Brooks) who dies only to find heaven has an entrance exam of sorts in which you literally defend your life. Well, the other day I watched a very good TED talk by my friend Bob Litan in which he defended his entire profession -- economics. I know no braver man.
Few of us would defend our professions. I’m a journalist -- what is there to say about that except that being a Congressman is worse? Yet Bob Litan volunteered for this gig, which he does with remarkable energy for a guy the size of a meerkat.
Bob names names and shows us the huge effect specific modern economists have had on our technological lives -- everything from Internet dating to how to efficiently end ad auctions. A lot of this I didn’t know and you may not have known, either.
It’s a way of thinking far beyond Freakonomics that has changed the way we all live, whether we knew it or not.
How would you defend your profession?
"The step after ubiquity is invisibility," Al Mandel used to say and it’s true. To see what might be The Next Big Thing in personal computing technology, then, let’s try applying that idea to mobile. How do we make mobile technology invisible?
Google is invisible and while the mobile Internet consists of far more than Google it’s a pretty good proxy for back-end processing and data services in general. Google would love for us all to interface completely through its servers for everything. That’s its goal. Given its determination and deep pockets, I’d say Google -- or something like it -- will be a major part of the invisible mobile Internet.
The computer on Star Trek was invisible, relying generally (though not exclusively) on voice I/O. Remember she could also throw images up on the big screen as needed. I think Gene Roddenberry went a long way back in 1966 toward describing mobile computing circa 2016, or certainly 2020.
Voice input is a no-brainer for a device that began as a telephone. I very much doubt that we’ll have our phones reading brainwaves anytime soon, but they probably won’t have to. All that processing power in the cloud will quickly have our devices able to guess what we are thinking based on the context and our known habits.
Look at Apple’s Siri. You ask Siri simple questions. If she’s able to answer in a couple words she does so. If it requires more than a few words she puts it on the screen. That’s the archetype for invisible mobile computing. It’s primitive right now but how many generations do we need for it to become addictive? Not that many. Remember the algorithmic Moore’s Law is doubling every 6-12 months, so two more years could bring us up to 16 times the current performance. If that’s not enough then wait awhile longer. 2020 should be 4096 times as powerful.
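The multiples in that paragraph are just compound doubling. Here is the arithmetic spelled out, taking the column's 6-12 month doubling assumption and counting from roughly 2014:

```python
# Performance multiples if capability doubles every `period` months.
# The 6-12 month doubling rate is the column's premise; base year is ~2014.

def multiple(years, doubling_period_months):
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

print(multiple(2, 6))    # 16.0   -> "two more years ... 16 times"
print(multiple(6, 6))    # 4096.0 -> "2020 should be 4096 times as powerful"
print(multiple(2, 12))   # 4.0    -> at the slow end of the doubling range
```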
The phone becomes an I/O device. The invisible and completely adaptive power is in the cloud. Voice is for input and simple output. For more complex output we’ll need a big screen, which I predict will mean retinal scan displays.
Retinal scan displays applied to eyeglasses have been around for more than 20 years. The seminal work was done at the University of Washington and at one time Sony owned most of the patents. But Sony, in the mid-90s, couldn’t bring itself to market a product that shined lasers into people’s eyes. I think the retinal scan display’s time is about to come again.
The FDA accepted 20 years ago that these devices were safe. They actually had to show a worst case scenario where a user was paralyzed, their eyes fixed open (unblinking) with the laser focused for 60 consecutive minutes on a single pixel (a single rod or cone) without permanent damage. That’s some test. But it wasn’t enough back when the idea, I guess, was to plug the display somehow into a notebook.
No more plugs. The next-generation retinal scan display will be wireless and far higher in resolution than anything Sony tested in the 1990s. It will be mounted in glasses but not block your vision in any way unless the glasses can be made opaque as needed using some LCD shutter technology. For most purposes I’d like a transparent display but to watch an HD movie maybe I’d like it darker.
The current resting place for a lot of that old retinal scan technology is a Seattle company called Microvision that mainly makes tiny projectors. The Sony patents are probably expiring. This could be a fertile time for broad innovation. And just think how much cheaper it will be thanks to 20 years of Moore’s Law.
The rest of this vision of future computing comes from Star Trek, too -- the ability to throw the image to other displays, share it with other users, and interface through whatever keyboard, mouse, or tablet is in range.
What do you think?
I had lunch last week with my old friend Aurel Kleinerman, an MD who also runs a Silicon Valley software company called MITEM, which specializes in combining data from disparate systems and networks onto a single desktop.
Had the Obama Administration known about MITEM, linking all those Obamacare health insurance exchanges would have been trivial. Given MITEM’s 500+ corporate and government customers, you’d think the company would have come to the attention of the White House, but no.
Lesson #1: Before reinventing any wheels, first check the phone book for local wheel builders.
Aurel, who is Romanian, came to the USA in 1973 to get a PhD in Math at Cornell (followed later by an MD from Johns Hopkins) and learned English by watching the Senate Watergate hearings on TV. "They were on every night in the student center so I came early to get a seat close to the TV," he recalled. "That meant I also got to watch reruns from the original Star Trek series that seemed to always play right before the hearings. So I guess I owe my English skills to a combination of Watergate and Star Trek".
Our lunch discussion wasn’t about Watergate or Star Trek, but about supply and demand and how these concepts have changed in our post-industrial age.
"Demand drove supply in the industrial age," said Aurel. "You needed more steel to build cars so a new steel mill was built. But today it seems to me that supply is actually driving demand".
He’s right. Intrinsic to every technology startup company is an unmet need not on the part of the market but on the part of the founder. They want a device or a piece of software that doesn’t exist so they start a company to build it. Customers eventually appear, attracted by the new innovation, but it didn’t come about because they asked for it.
"You can’t rely on customers to tell you what to build," said Aurel. "They don’t know".
And so it has been for at least 30 years. When Lotus 1-2-3 was being developed in the early 1980s the developers were very proud of their macro functions, which they saw as a definite improvement on VisiCalc, the pioneering spreadsheet that dominated the market then. But when the Lotus marketing folks asked potential customers what improvements they’d like, not one mentioned macros. Yet when focus groups were shown the new software these same people declared macros a hit. "That’s what I want!" Only they hadn’t known it.
For that matter, VisiCalc itself didn’t come about because of customer demand: nobody back then knew they even needed a spreadsheet.
We see this effect over and over. Look at cloud computing, for example. It’s easy to argue that the genesis of cloud was Google’s desire to build its own hardware. Google was nailing motherboards to walls at the same time Excite (Google’s main search competitor at the time) was spending millions on Sun computers in a sleek data center. Google’s direction turned out to be the right one but that wasn’t immediately evident and might well have never happened had not Larry and Sergey been so cheap.
Extending this concept, Amazon’s decision to sell retail cloud services wasn’t based on demand, either. No entrepreneurs were knocking on Amazon’s door asking for access to cloud services. That’s not how it was done then. Every startup wanted their own data center or at least their own rack in someone else’s data center. Amazon embracing virtualization and shared processing changed everything, at the same time knocking a zero out of the cost for new software startups.
If there’s a lesson to be embraced here, then, it’s If we build it they might come.
Apple’s success has long been based on this principle. Nobody was demanding graphical computers before the Macintosh arrived (the three graphical computers that preceded the Mac all failed). There were smart phones and tablets and music players before the iPhone, iPad, and iPod, too, yet each Apple product stimulated demand by appealing not to what customers said they wanted but to what customers innately needed.
The big question -- make that the constant question -- is what will customers want next? What’s the next big platform, the next big innovation? I have some ideas about that I’ll share with you later in the week.
What do you think is coming next?
Photo Credit: alphaspirit/Shutterstock
Economist David Stockman, who is probably best known for being President Reagan’s budget director back in the era of voodoo economics, has been particularly outspoken about IBM as a poster child for bad policy on the part of the US Federal Reserve. How this would be isn’t immediately obvious but I think is worth exploring because IBM is far from the only company so afflicted. There’s an important effect here to be understood about corporate motivations and their consequences.
So I’ll begin with a story. Almost 40 years ago there was a study I worked on at Stanford’s Institute for Communication Research having to do with helping farmers in Kentucky be more successful by giving them access to useful government data. The study was sponsored by the United States Department of Agriculture (USDA) and it gave portable computer terminals to farmers along with access to databases at the USDA, National Oceanic and Atmospheric Administration, Department of Commerce, etc. The idea was that with this extra knowledge farmers would be able to better decide what crops to plant, when to plant them, when to harvest them, etc.
It didn’t work.
The farmers, even those farmers who made greatest use of the data, were no better as commercial farmers than the control group that had no special information or communication resources.
But this doesn’t mean they didn’t benefit from the data. Those farmers who used their terminals most found the data useful for speculating on commodities markets. These were hedging strategies to some extent but the best traders took them much further, significantly supplementing their farm income. They weren’t better farmers but they were better businesspeople.
It was an unintended consequence.
Now back to IBM. With the Great Recession of 2008 the Federal Reserve under then chairman Ben Bernanke lowered interest rates almost to zero in an attempt to make the recession less severe by spurring business spending to lead a return to growth.
This didn’t work, either. The recession dragged on and on but the companies that were expected to spend us back to better economic health didn’t do so. Traditional monetary policy said that given really cheap money companies would invest in their businesses.
Instead they tended to borrow money and invest in their own shares. At least that’s what IBM did.
And it’s easily understandable why IBM did this. Their cost of money has been about one percent. The dividend yield on IBM shares has been around two percent. On the basis of dividend savings alone it made sense to buy back and retire the shares as long as interest rates remained low. This Fed-driven stock arbitrage (and not IBM’s actual business) has been in large part behind the strength in IBM shares over the past several years.
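Here is that arbitrage in miniature. The one percent cost of money and two percent dividend yield are the figures above; the borrowing amount, share count, share price, and profit number are hypothetical placeholders just to show the mechanics:

```python
# The Fed-driven buyback arbitrage in miniature.
# The ~1% cost of money and ~2% dividend yield are the column's figures;
# the $10B borrowing, $180 share price, and profit figure are hypothetical.

borrowed        = 10_000_000_000   # hypothetical buyback funded with debt
cost_of_money   = 0.01             # ~1% interest on the borrowing
dividend_yield  = 0.02             # ~2% dividend no longer paid on retired shares

interest_paid   = borrowed * cost_of_money
dividends_saved = borrowed * dividend_yield
net_carry       = dividends_saved - interest_paid
print(f"Net annual carry: ${net_carry:,.0f}")   # positive while rates stay low

# And fewer shares means higher earnings per share even with flat profits:
net_income     = 16_000_000_000                 # hypothetical, held constant
shares_before  = 1_000_000_000
shares_retired = borrowed / 180                 # hypothetical $180 share price
print(f"EPS before buyback: {net_income / shares_before:.2f}")
print(f"EPS after buyback:  {net_income / (shares_before - shares_retired):.2f}")
```

The catch, as the next paragraphs explain, is that the trade only works while rates stay near zero and while nobody looks too closely at the underlying business.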
There are several problems with this policy, however. For one thing, interest rates eventually go back up. They haven’t yet but eventually they will and at that point IBM may find itself selling shares to retire debt.
Another problem with this policy is that it decouples IBM’s stock from the reality of IBM’s operating businesses. Sales go down, operating profits go down, but earnings go up because there are fewer total shares. The longer this goes on and the more used to such unreality you become as a company the harder it is to get back to minding the business, which for the most part IBM hasn’t done.
The last problem with this policy is that money spent buying back shares does nothing to help the business itself. Here’s an interesting video featuring former Intel CEO Craig Barrett in which he explains, around 41:25 in, exactly why companies can’t cut costs to get out of a recession -- that they have to spend their way out. "Invest their way out," Barrett says.
IBM, totally ignoring Barrett’s advice, has primarily tried to cut its way back to prosperity and it doesn’t work.
The result of all this is that IBM management has lost touch with reality. They no longer know what to do to save the company. I’m far from the only person saying this, by the way. Check out this clip from CNBC. It used to be that only I was saying this stuff; now many pundits are catching on.
Which brings us finally to IBM’s cloud strategy. Remember IBM’s future is supposedly based on mobile, cloud, and analytics -- mobile being the Apple/IBM deal I wrote about last week. Just in the last week or so IBM CEO Ginni Rometty has started to back away from cloud as the basis of IBM’s future and for good reason: it can’t work.
Cloud is an industry where prices are dropping by half every year and will continue to do so for the foreseeable future. It’s an industry where the incumbents not only have very deep pockets, the biggest of them aren’t even reliant on cloud for their survival. Amazon is the cloud leader, for example, yet if its cloud business went under you’d hardly see a blip in the company’s financials -- it’s such a small part of Amazon’s business. So too with Google.
But what about IBM? Unlike these other companies, IBM has to actually make money on its cloud investments because it's told the world that will be the basis of much of its income moving forward. Except it won’t, because cloud computing has become a commodity and IBM has never been successful in a commodity business.
And then there is Microsoft. Microsoft, too, has said that its future relies on cloud success (Windows Azure). Microsoft has more money than IBM and a more motivated work force. Microsoft will do whatever it takes to win in the cloud. It has done it before, investing tens of billions to build, for example, the Xbox franchise, which may still be in the red. IBM doesn’t have that kind of patience, motivation, or deep pockets.
IBM will sell cloud services to their existing customer base at prices above the market right until those customers come to understand that IBM’s cloud isn’t any better than the others. Then the companies and governments will switch to cheaper providers and IBM will abandon the sector just as it has so many others (PCs, on demand, and now X-series servers, too).
But now we know it wasn’t Ginni’s fault for failing to understand her own business. It’s all the Fed’s fault for failing to anticipate an unintended consequence of its own policy -- that people would generally rather eat ice cream than make it.
Given that I used to work for Apple and have lately been quite critical of IBM, readers are wondering what I think of Tuesday’s announcement of an iOS partnership of sorts between Apple and IBM. I think it makes good sense for both companies but isn’t a slam dunk for either.
There are three aspects to this deal -- hardware, apps, and cloud services. For Apple the deal presents primarily a new distribution channel for iPhones and iPads. Apple can always use new channels, especially if they hold inventory and support customers who aren’t price-sensitive. Apple’s primary goal is to simply get more devices inside Big Business and this is a good way to do that.
The apps will all be developed by IBM but will still sell through the App Store and will have to meet Apple’s quality standards. I guarantee you meeting those standards will be a problem for IBM, but that’s not Apple’s problem. In fact as far as I can tell Apple has few if any resources deployed on the app side so for them it’s almost pure profit. Who can argue with that?
Cloud services for iOS are more complex and problematic. I’m doing a whole column shortly on IBM’s cloud strategy so I won’t go too deeply into it here, but let me point out a couple things. Apple has more data center space than does IBM, so it’s not like Cupertino needs IBM’s cloud capabilities. Apple is also a customer of Amazon Web Services, the largest cloud vendor of all. These facts suggest to me that this aspect of the deal is where fantasy hits reality. IBM wants iOS cloud services, not Apple. Big Blue dreams of iOS cloud dominance and they expect it will be fairly easy to accomplish, too. After all, they have a contract!
But to Apple the cloud services are just a necessary expense associated with getting device distribution to IBM’s customers. If no cloud services actually appear or if they do appear but are useless, Apple won’t care. Same, frankly, for the IBM apps.
This isn’t the first time Apple and IBM have worked together. In the dark days of John Sculley Apple created with IBM two software partnerships -- Taligent and Kaleida. Taligent was supposed to do an object-oriented and very portable operating system but ended up doing some useful development tools before being absorbed completely into IBM. Kaleida Labs did a CD-oriented media player that was superseded by the Internet and died within three years. I had friends who worked at both concerns and told me of the culture clashes between Apple and IBM.
This new partnership will turn out differently from those. Apple will sell a ton of iPads and iPhones and IBM will make some money from that. IBM business apps will be less successful but there may be a few that appear. iOS cloud services from IBM won’t happen. More on that tomorrow.
The result will be that Apple wins and IBM doesn’t lose, but neither company will be seriously affected by the other. It’s just not that big a deal.
Last week Microsoft CEO Satya Nadella took another step in redefining his company for the post-Gates/Ballmer era, sending a 3100-word positioning memo to every Microsoft employee and to the world in general. I found it a fascinating document for many reasons, some of them even intended by Nadella, who still has quite a ways to go to legitimately turn Microsoft in the right direction.
We’re seeing a lot of this -- companies trying to talk their way into continued technology leadership. Well, talk is cheap, and sometimes that’s the major point: it can be far easier to temporarily move customers and markets through the art of the press release than by actually embracing or -- better yet -- coming up with new ideas. We’re at that point to some extent with this Nadella message, which shows potential but no real substance. But I think this was written not so much for customers as for the very employees it is addressed to.
It suggests to me a coming cultural revolution at Microsoft.
Read his message (the link is in the first paragraph) and I’ll wait for you to come back.
Wasn’t that slick? The major impression I got from this essay was that it had been highly polished, but then just look at Nadella with his perfect shirt and jacket and jeans. Tim Cook would love to look that good. The problem with polish is that it has to be underlaid with substance, and this message isn’t -- at least not yet.
Nadella begins at the altar of innovation, a word that at Microsoft has traditionally meant stealing technology. Of course he is the company cheerleader to some extent but Microsoft’s tradition of innovation is hard to even detect, much less celebrate or revive. This is revisionist history. Can he really believe it’s true?
He calls on Microsoft to rediscover its soul. I didn’t know Microsoft had lost its soul, though some might argue the company didn’t really have one. But if we can accept that Microsoft has in recent years started to play more fairly, is rediscovering its soul good or bad?
Nadella talks about "what only Microsoft can contribute to the world". I honestly don’t know what that means, do you?
His message is of course a repudiation of Steve Ballmer’s Devices and Services strategy and of Steve himself, which I see as VERY important. We’ve changed coaches, so to start winning again we’ll change the labels, too. Services are still in there, devices, too, but something’s different -- "mobile-first and cloud-first".
I don’t mean to be a pedant but which is it -- mobile first or cloud first? Only one thing can be first.
The gist of this is that Nadella intends to keep Microsoft important to personal and organizational productivity by emphasizing, it seems, the coordination of information in a world where users have multiple devices and there are a growing number of devices independent from any user -- the so-called Internet of things.
The obvious problem here is that for the first time in a long time Microsoft isn’t a leader in any of this. It is sometimes a strong player (and sometimes, as in mobile, not), but it is not embracing what Bill Gates saw as the very essence of Microsoft -- establishing de facto standards. Windows is the top OS, but it’s pretty much ignored here. Same for Office. Xbox is a big success but it’s not the top game system and it hardly creates a de facto standard. Windows Server and .NET are solid players but not dominant in the old sense that Microsoft can threaten to pull a few APIs and destroy a developer’s world. Microsoft is just one of many companies in accounting and business intelligence.
The reality here that Nadella -- to his credit -- at least alludes to, is that the playing field is now level, the score is 0-0 (or more likely 0-0-0-0-0) and for Microsoft to win it will have to play hard, play fair, and win on its merits. And that’s what leads into the discussion of culture and how the company will do anything it must to succeed. "Nothing is off the table", Nadella wrote. This is the most important part of the message and, indeed, is probably the only part that really matters.
Microsoft has a shitload of money and thousands of good employees but it also has a corrosive management culture that tends to work against true innovation. Nadella’s biggest challenge is to change that culture. The next six months will be key. If it works, great. If it doesn’t, then there will be another CEO and another plan. Microsoft can probably afford to blow it another time or two, but that’s all. Good luck Satya. Time marches on.
What do you think Microsoft is doing?
The US Marshals Service doesn’t normally make economic policy but this week they apparently did so by auctioning 30,000 Bitcoins, a crypto currency I have written about before. This auction effectively legitimizes Bitcoins as part of the world economy. Am I the only one to notice this?
My first column on this subject was a cautionary tale pointing out the two great areas of vulnerability for Bitcoin: 1) the US Government might declare Bitcoins illegal, and 2) someone might gain control of a majority of Bitcoins, in which case their value could be manipulated. While number two is still theoretically possible it becomes less likely every day. And number one seems to have been put to rest by the U.S. Marshals.
The Marshals dispose of property confiscated by federal authorities, giving the proceeds to the Treasury Department and back to the Department of Justice. This auction -- worth about $11 million -- could not have happened without the permission of both agencies (Treasury and Justice). This is why I can claim that economic policy has been made, because it was authorized by the very agencies normally responsible for such policies.
When drug dealers lose their helicopters, Swiss watches, and Cigarette speed boats, the US Marshals sell them for money. The Marshals don’t sell confiscated drugs, however, because drugs are illegal; those are destroyed. If Bitcoins were illegal they, too, would have been destroyed, not sold. Hence Bitcoins are not illegal.
And why should they be, really? Between derivative securities, futures and options there are already plenty of financial instruments that don’t look much like money to me but are treated as such.
Now, even though nobody has yet announced it, we can add Bitcoins to that list.
And who bought all those Bitcoins? Third generation VC Tim Draper, saying he’ll use the coins as a currency hedge.
In theory this threatens the US dollar‘s role as the reserve currency, but that theory is pretty weak with a finite number of Bitcoins even possible and most users still thinking of it as anonymous dollars.
It’s this anonymous nature of Bitcoins that has me puzzled. My guess is they aren’t anonymous at all and the NSA has thrown its elves into deciphering Bitcoin metadata. Otherwise this auction would never have happened.
Where is Edward Snowden when we need him?
"All politics is local," said House Speaker Tipp O’Neill, meaning that every politician has to consider the effect that his or her positions will have on voters. What makes perfect sense on a national stage might be a disaster back in the district, where the actual voters live. And so it is, too, with big companies, where local impact is sometimes more important than national or international. Sometimes, in fact, companies can be completely re-routed solely to please or affect a single executive. I believe we are seeing precisely that right now at Google concerning Google X.
Google X is that division of the search giant responsible for self-driving cars, Google Glass, and the prospect of hundreds or thousands of balloons floating through the stratosphere bringing Internet service to grateful Polynesians. It’s those balloons, in fact, that led me to this topic.
At Google they call Google X projects moon shots, the idea being that they are multi-year efforts leading toward disruptive innovations and new markets for Google to dominate. Or not…
Moon shots I get, but balloons? Balloons I don’t get. Anyone who proposes to bring Internet service to the Third World by setting hundreds of balloons adrift is simply crazy. It’s not impossible, just stupid. The goal is laudable but there are several better ways to achieve it than leaving network coverage up to the prevailing winds. Little satellites are far better than balloons, for example, and probably cheaper, too.
Why would Google X spend what are likely tens of millions on something as crazy as balloons? I think it is because the real output of Google X isn’t progress, it’s keeping Google co-founder Sergey Brin out of CEO Larry Page’s hair.
"Balloons? Heck of an idea, Sergey. Go for it!"
Google is a typical high tech success story in that the very young founders brought in professional managers to help them grow the company, learn how to be leaders and -- most importantly -- take Google public, securing their personal fortunes. For Google that leadership came from Eric Schmidt, just as at Yahoo it came from Tim Koogle and at Microsoft it came from Jon Shirley. Michael Dell did the same thing to help smooth the way for Dell’s IPO. And like all these others, each founder then quickly pushed aside the older managers once his company’s fortunes were secure.
That’s what Larry Page did with Eric Schmidt, who continues to nominally work for Google and earns a boatload of money, but in fact mainly serves as a G-V-flying global statesman for a company that may not actually need one.
"Global statesman? Heck of an idea, Eric. Go for it!"
With Schmidt out of the way Page was still stymied in his quest for total executive power by Brin, who has just as much stock and just as many votes. How to keep Sergey distracted from the day-to-day?
Google X!
Invent the future, change the world, spend $2 billion per year. Heck, something may even come of it. But if nothing does that’s okay, too, because $2 billion is a low price to pay for executive stability in Mountain View.
Ironically this might actually make it more -- not less -- likely that something useful will eventually come out of Google X. That’s simply because Sergey can continue spending for as long as he wants, making truly long-term projects viable in a world where all other R&D has to pay off in a year or less.
So I love Google X as do all the other reporters who constantly need something wacky to write about. But let’s not pretend Google X is what it’s not.
Heck of an idea! Go for it!
My book, The Decline and Fall of IBM, is now available in paperback, on the iPad and Nook, as well as on the Kindle. A dozen other platforms plus an audio book will be available shortly, but these are the big ones.
Over the weekend I received a very insightful message about the book from reader Steve Jenkins in Australia, where IBM is showing the same behavior problems as everywhere else. Steve has an insight into Big Blue that I wish I had thought to include in the book because I believe he is absolutely correct.
"Finished your e-book, but skimmed the blog comments," wrote Steve. "The ‘Financial Engineering’ is important: the C-suite is converting 15 percent of the company into share ‘value’ each year to feed their bonuses. Is that a 6-year half-life? It’s a technique with a limited application, but could work well for a decade.
"Reading your analysis, I was wracking my brain trying to think of precedents, in the Industry and really couldn’t. Not even Unisys. But there is one, it’s glaring, obvious and should be frightening to shareholders & customers: Communist USSR & the Eastern European Bloc.
"Same deal, I think. Cooked books, pain for the working classes, luxury and riches for the Ruling Elite, Ethics & Morals of no consequence. And those ‘Five Year Plans’ are the same: full of certainty and false promise, based on measuring and rewarding the wrong things (like tons of nails made) -- with no concern, checking or consequences for gaming the system.
"You absolutely nailed one of the Cultural problems: 'The Big Bet'.
"I don’t think IBM’s Board & C-suite is in denial. I suspect it’s more like the Kremlin in 1988 and I don’t have good words for it. Fantasy Land, definitely. Disconnected from Reality, certainly. Denial implies some recognition or understanding of the truth".
Thanks for the insight, Steve. Yes, IBM in 2014 is the Kremlin in 1988, with the big question being whether Ginni Rometty is Mikhail Gorbachev. I don’t think she is. Gorbachev at least took a shot at managing the transition, which Ginni so far has not. That would make her Gorby’s predecessor, Konstantin Chernenko, who died in office.
I suspect that will be Ginni’s fate, too, at least metaphorically. As more people read my book and come to understand not just the house of cards that IBM has become but that it’s just an extreme example of what’s happening all over American big business, IBM’s board will come to life and try to save itself by ejecting her in favor of a more Gerstner-like character.
Heck, Rometty’s replacement could be Lou Gerstner.
My next column (the last in this series I’m sure you’ll be happy to know) is about the probable endgame for IBM.
Well my IBM eBook is finally available. Right now that’s just on Amazon.com for the Kindle (just click the link to the right) but by next week it will be on every eBook platform (iPad, Nook, etc.) and there will be a trade paperback as well as an audio edition. I’ll announce all of those here as they appear.
I feel I owe an explanation for the long delay in publishing this book. I finished it in early January, about a week after my mother died, only to learn that my old-school book publisher didn’t want to touch it. Or more properly, they wanted me to be entirely devoted to the book they were paying me a ton of money to write and to put IBM on hold, even though the eBook had been in the works for two years and was completely ready to go.
What would you do?
I tried everything -- everything -- to publish this book without being sued for breach of contract. The final solution was to pay back all the money and walk away from the big book deal. I’ll still finish that book but I’ll have to find a new publisher or do it myself like I have with this one.
Though originally finished in January the manuscript has been updated and revised every month since and is up-to-date as of last week.
Here’s the introduction. If it sounds interesting please order one for all your friends. It’s cheap at $3.99 for 293 pages and having written that big check I must sell close to 100,000 copies just to break-even.
Thanks for waiting.
Introduction
The story of this book began in the summer of 2007 when I was shooting a TV documentary called The Transformation Age -- Surviving a Technology Revolution, at the Mayo Clinic in Rochester, Minnesota. Rochester has two main employers, Mayo and IBM, and a reporter can’t spend several days in town without hearing a lot about both. What I heard about IBM was very disturbing. Huge layoffs were coming as IBM tried to transfer much of its U.S. services business directly to lower-cost countries like India and Argentina. It felt to me like a step down in customer service, and from what I heard the IBMers weren’t being treated well either. And yet there was nothing about it even in the local press.
I’ve been a professional journalist for more than forty years and my medium of choice these days is the Internet where I am a blogger. Bloggers like me are the 21st century version of newspaper beat reporters. Only bloggers have the patience (or obsessive compulsive disorder) to follow one company every day. The traditional business press doesn’t tend to follow companies closely enough to really understand the way they work, nor do they stay long enough to see emerging trends. Today business news is all about executive personalities, mergers and acquisitions, and of course quarterly earnings. The only time a traditional reporter bothers to look -- really look -- inside a company is if they have a book contract, and that is rare. But I’ve been banging away at this story for seven years.
Starting in 2007, during that trip to Minnesota, I saw troubling things at IBM. I saw the company changing, and not for the better. I saw the people of IBM (they are actually called "resources") beginning to lose faith in their company and starting to panic. I wrote story after story, and IBM workers called or wrote me, both to confirm my fears and to give me even more material.
I was naive. My hope was that when it became clear to the public what was happening at IBM that things would change. Apparently I was the only member of the press covering the story in any depth -- sometimes the only one at all. I was sure the national press, or at least the trade press, would jump on this story as I wrote it. Politicians would notice. The grumbling of more than a million IBM retirees would bring the story more into public discourse. Shamed, IBM would reverse course and change behavior. None of that happened. This lack of deeper interest in IBM boggled my mind, and still does.
Even on the surface, IBM in early 2014 looks like a troubled company. Sales are flat to down, and earnings are too. More IBM customers are probably unhappy with Big Blue right now than are happy. After years of corporate downsizing, employee morale is at an all-time low. Bonuses and even annual raises are rare. But for all that, IBM is still an enormous multinational corporation with high profits, deep pockets, and grand ambitions for new technical initiatives in cloud computing, Big Data analytics, and artificial intelligence as embodied in the company’s Jeopardy game-show-winning Watson technology. Yet for all this, IBM seems to have lost some of its mojo, or at least that’s what Wall Street and the business analysts are starting to think.
Just starting to think? The truth is that IBM is in deep trouble and has been since before the Great Recession of 2008. The company has probably been doomed since 2010. It’s just that nobody knew it. These are harsh words, I know, and I don’t write them lightly. By doomed I mean that IBM has chosen a path that, if unchanged, can only lead to decline, corporate despair, and ultimately insignificance for what was once the mightiest of American businesses.
If I am correct about IBM, whose fault is it?
In its 100 years of existence, the International Business Machines Corporation has had just nine chief executive officers. Two of those CEOs, Thomas J. Watson and his son Thomas J. Watson Jr., served for 57 of those 100 years. Between father and son they created the first true multinational computer company, and defined what information technology meant for business in the 20th century. But the 20th century is over and with it the old IBM. In the current century, IBM has had three CEOs: Louis V. Gerstner Jr., Samuel J. Palmisano, and Virginia M. Rometty. They have redefined Big Blue, changing its personality in the process. Some of this change was very good, some of it was inevitable, but much of it was bad. This book is about that new personality and about that process.
Lou Gerstner saved IBM from a previous crisis in the 1990s, and then went on to set the company up for the crisis of today. Gerstner was a great leader who made important changes in IBM, but didn’t go far enough. Worse still, he made a few strategic errors that helped the company into its current predicament. Sam Palmisano reversed some of the good that Gerstner had done and compounded what Gerstner did wrong. The current crisis was made inevitable on Palmisano’s watch. New CEO Ginni Rometty will probably take the fall for the mistakes of her predecessors. She simply hasn’t been on the job long enough to have been responsible for such a mess. But she’s at least partly to blame because she also hasn’t done anything -- anything -- to fix it.
We’ll get to the details in a moment, but first here is an e-mail I received this January from a complete stranger at IBM. I have since confirmed the identity of this person. He or she is exactly as described. Some of the terminology may go over your head, but by the end of the book you’ll understand it all. Read it here and then tell me there’s nothing wrong at IBM.
"Please keep this confidential as to who I am, because I’m going to tell you the inside scoop you cannot get. I am rated as a #1. That’s as high as you go, so calling me a disgruntled employee won’t work.
"Right now the pipeline is dry -- the number of services folks on the bench is staggering and the next layoff is coming. The problem now is that the frequent rebalancing has destroyed morale, and so worried troops don’t perform well. Having taken punitive rather than thoughtful actions, Ginni has gutted the resources required to secure new business. Every B-School graduate learns not to do that. The result is a dry pipeline, and while you can try to blame the cloud for flagging sales, that doesn’t work. Those cloud data centers are growing. The demand for hardware didn’t shrink -- it simply moved. Having eliminated what did not seem necessary, the brains and strategy behind the revenue are now gone, leaving only ‘do now’ perform people who cannot sell. Sales reps have no technical resources and so they cannot be effective. Right now we cannot sell. There is no one to provide technical support. The good people are finding jobs elsewhere. The [job] market outside IBM is improving. I am interviewing at a dozen companies now. Soon as I find something perfect for me, I’m gone. They don’t expect people like me to leave.
"Ever work anywhere where you were afraid to make a large purchase like a car because you don’t know that you will have a job in a month? That’s how everyone feels at IBM. Now we are doing badly on engagements. I cannot think of a single engagement where we are not in trouble. We lay off key people in the middle of major commitments. I cannot tell you how many times I’ve managed to get involved in an engagement and cannot lay my hands on the staff required to perform.
"The whole idea that people in different time zones, all over the world can deliver on an engagement in Chicago is absurd.
"Lastly, using the comparative scheme for employee evaluations is simply stupid. No matter how great the entire staff is, a stack ranking will result in someone at the top and someone at the bottom. It ignores that the dead wood is gone.
"Ginni has made one horrible mistake. Sam, and now, Ginni, has forgotten that IBM was made by its people. They have failed to understand their strongest assets, and shortly will pay for that. IBM just hit the tipping point. I do not think there is any way back".
Years ago IBM could sell an idea. They’d come in, manage a project, develop an application, and it would make a big difference to the customer. IBM would generally deliver on their promises, and those benefits more than paid for the high cost of the project and the computers. IBM transformed banks by getting them off ledger books. Remember the term "bankers’ hours"? Banks were only open to the public for part of the day. The rest of the time was spent with the doors closed, reconciling the transactions.
But that was then and this is now. IBM’s performance on its accounts over the last 10 years has damaged the company’s reputation. Customers no longer trust IBM to manage projects well, get the projects finished, or have the projects work as promised. IBM is now hard pressed to properly support what it sells. Those ten years have traumatized IBM. Its existing businesses are underperforming, and its new businesses are at risk of not succeeding because the teams that will do the work are damaged.
Let’s look to the top of IBM to understand how this happened.
"The importance of managers being aligned with shareholders -- not through risk-free instruments like stock options, but through the process of putting their own money on the line through direct ownership of the company—became a critical part of the management philosophy I brought to IBM," claims former CEO Lou Gerstner in his book, Who Says Elephants Can’t Dance?
Defying (or perhaps learning from) Gerstner, IBM’s leaders today are fully isolated and immune from the long-term consequences of their decisions. People who own companies manage them to be viable for the long term. IBM’s leaders do not.
I am not going to explain in this introduction what is wrong with IBM. I have the whole book to do that. But I do want to use this space to explain why a book even needs to be written and how I came up with that provocative title.
The book had to be written because writing the same story over and over for seven years hasn’t changed anything. The only possible way to still accomplish that, I figured, was to put all I know about IBM in one place, lead readers through the story, and at the end take a shot at explaining how to actually fix IBM. The last chapter goes into some detail on how to get IBM back on course. It isn’t too late for that, though time is growing short.
The title is based on Edward Gibbon’s The History of the Decline and Fall of the Roman Empire, published in six volumes beginning in 1776. Decline and Fall was the first modern book on Roman history. It was relatively objective and drawn from primary sources. And it recounted the fall of an empire some thought would last forever. In industrial terms many people thought the same about IBM.
Gibbon’s thesis was that the Roman Empire fell prey to barbarian invasions because of a loss of virtue. The Romans became weak over time, outsourcing the defense of their empire to barbarian mercenaries who eventually took over. Gibbon saw this Praetorian Guard as the cause of decay, abusing their power through imperial assassinations and incessant demands for more pay.
The Praetorian Guard appears to be in charge these days at IBM, as you’ll see.
Even a self-published book with one author is the product of many minds. Katy Gurley and Michael McCarthy were my editors. Kara Westerman copy-edited the book. Lars Foster designed the cover. Many loyal IBMers gave me both information and the benefit of their wisdom to make the book possible. Out of necessity, because quoting them directly would imperil their jobs, these heroes must go unnamed.
We can all hope their assistance will not have been in vain.
Revisionist history is looking back at past events in light of more recent information. What really happened? And no recent source of information has been more important when it comes to revising the history of digital communications than former National Security Agency (NSA) contractor Edward Snowden. Today I’m really curious about the impact of the NSA on the troubled history of Ultra Wide Band (UWB) communication.
I stumbled on this topic with the help of a reader who pointed me at a story and then a paper about advances in secure communication. Scientists at the University of Massachusetts came up with a method of optical communication that they could mathematically prove to be immune to snooping or even detection up to a certain bit rate. To an eavesdropper who didn’t know what to listen for or when to listen for it the communication just looks like noise.
"Who knew that people were actually thinking about replacing IP communications?" asked my reader. "This paper is interesting in that point-to-point communications uses Layer-1 for the entire signal. I had been thinking Layer-2, but here is secure Layer-1 communications".
But it looked a lot like Ultra Wide Band to me.
Many readers won’t remember, but UWB was a big story about 12 years ago. I wrote a couple of columns on the subject back then and it looked very promising. The venture capital community thought so too, putting about $1 billion into a number of UWB startups, all of which to my knowledge eventually failed. But why did they fail?
UWB would not have replaced IP communications in that senders and receivers might still have used IP addresses to identify themselves if they chose to do so, but it promised to replace nearly all of the machinery inside big chunks of the Internet, bringing secure multi-gigabit wireless communications to LANs and WANs alike.
The UWB startup that got the most press back then was called Time Domain and the name says a lot about how the technology worked. Rather than using specific frequencies UWB transmitted on all frequencies at the same time. The key was knowing when and where in the frequency band to expect a bit to appear. Two parties with synchronized clocks and codebooks could agree that at 10 nanoseconds after the hour at a certain frequency or range of frequencies a bit would appear if one was intended. The presence of that signal at that time and place was a 1 and the absence was a 0. But if you didn’t know when to listen where -- if you weren’t a part of the conversation -- it all looked just like noise.
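To make the time-coding idea concrete, here is a toy sketch of mine -- not Time Domain’s actual design. The slot count and the shared seed (standing in for synchronized clocks and a codebook) are invented; real UWB used picosecond pulses, not Python lists.

```python
# A toy version of time-coded signaling. Sender and receiver share a secret
# seed that tells them which time slot in each frame may carry a pulse.
import random

SLOTS_PER_FRAME = 1000          # possible pulse positions in each frame
SECRET_SEED = 42                # the shared "codebook"

def send(bits, seed=SECRET_SEED):
    rng = random.Random(seed)
    frames = []
    for bit in bits:
        slot = rng.randrange(SLOTS_PER_FRAME)   # the agreed-upon slot for this frame
        frame = [0] * SLOTS_PER_FRAME
        if bit:
            frame[slot] = 1                     # pulse present means 1, absent means 0
        frames.append(frame)
    return frames

def receive(frames, seed=SECRET_SEED):
    rng = random.Random(seed)
    return [frame[rng.randrange(SLOTS_PER_FRAME)] for frame in frames]

message = [1, 0, 1, 1, 0, 0, 1]
print(receive(send(message)))           # [1, 0, 1, 1, 0, 0, 1]
print(receive(send(message), seed=7))   # wrong codebook: almost certainly all zeros
```

Feed the receiver the wrong codebook and the message simply isn’t there, which is the whole point: if you don’t know when to listen where, a 1 and empty air look exactly the same.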
There was a San Diego startup called PulseLINK that came up with the idea to try UWB not only through the air but also over wires. They reasoned that RF traveled through copper as well as it traveled through air. So they injected their UWB signal into the local cable TV system (without permission of course) just to see what would happen. Could they establish point-to-point and point-to-multipoint communications as an Over-The-Top network on the local cable plant? It worked. They created a gigabit network atop established cable infrastructure without the cable company even noticing it was happening.
One fascinating aspect of the PulseLINK test was that UWB, which is an electrical signal, was able to propagate throughout the Cox cable plant even though sections of the Cox network used optical signaling. The signal went from electrons to photons and back to electrons again, and that was possible only because the system was CATV, not IP. Had the cable system been IP-based, every electrical-to-optical conversion point would have involved capturing packets, fixing them as needed, then retransmitting them, which would have foiled UWB. But in the rotgut world of cable there were no packets -- just an analog signal carrying digital and analog data alike which was block converted from one medium to another and back. So the UWB data was converted and retransmitted throughout the cable plant as noise.
UWB was controversial and there were many RF engineers who opposed it, arguing that it would hurt standard communications and even take down the GPS system.
But UWB had many other people excited not only because it promised huge bandwidth increases but because it would enable communication with places previously unreachable, like submarines deep underwater. It offered not only wireless communication with miners underground but could support a type of radar to identify where underground the miners were located in case of an accident. UWB was similarly going to revolutionize communication among firefighters showing where they were inside burning buildings. The same, of course, could be done for soldiers in battle.
And all of this would have been essentially immune to eavesdropping. No man in the middle.
So what happened? The FCC was asked to approve UWB and did so, but with extreme power limitations, turning what would have been a WAN into not even a LAN but a PAN -- a Personal Area Network -- with a range of 10 meters or less. UWB, which had been intended to compete with every networking technology, ended up a competitor only to Bluetooth. And with its vastly higher computational complexity UWB couldn’t economically compete with Bluetooth, and so all the UWB startups died.
Or did they?
The FCC decision was a shock to all involved, especially to the VCs who had ponied up that $1 billion. Yet nobody protested much, and no bills were offered in Congress pushing the FCC to change its mind. Just move along people, nothing to see here.
The FCC said in its 2002 rule that they would revisit UWB but never really did.
The FCC’s argument was that successful UWB communication would have raised the noise threshold for other kinds of RF communication. This is true. But I never saw any studies showing that it would have kept anyone from listening successfully to Top-40 radio. The pulses were so short as to be undetectable by legacy equipment. It always seemed to me the FCC rationale was too thin, that they killed UWB too easily. But what did I know?
The better question today, I think, is what did Edward Snowden know? Given what we now know of the NSA’s broad interference in the networking business, the agency would have opposed any new technology that would have defeated all its snooping work to date. Of course the NSA hated UWB. And the Department of Defense and the Central Intelligence Agency would have loved it. Here was the possibility of a truly private global channel if they could keep it to themselves.
This is all just speculation on my part, but Snowden got me thinking. Did the NSA help the FCC to kill UWB? That’s exactly the kind of idea that would have appealed to the second Bush Administration. Did the US intelligence and defense communities pick up UWB to advance their own secret communication capabilities while silencing any VC outrage by covering some of their losses?
That’s what I would have done were I Dick Cheney.
There’s a peering crisis apparently happening right now among American Internet Service Providers (ISPs) and backbone providers according to a blog post this week from backbone company Level3 that I am sure many of you have read. The gist of it is that six major ISPs of the 51 that peer with Level3 have maxed-out their interconnections and are refusing to do the hardware upgrades required to support the current level of traffic. The result is that packets are being dropped, porn videos are stuttering, and customers are being ill-served. I know exactly what’s going on here and also how to fix it, pronto.
The problem is real and Level3’s explanation is pretty much on target. It’s about money and American business, because this is a peculiarly American problem. Five of the six unnamed ISPs are American and -- given that Level3 also said they are the ones that typically get the lowest scores for customer service (no surprise there, eh?) -- we can guess at least some of the names. According to the American Customer Satisfaction Index’s 2013 report (the latest available with a new one due any day now) the worst ISPs in America are -- from worst to less bad but still lousy -- Comcast, Time Warner Cable, CenturyLink, Charter Communications, AT&T U-verse, Cox Communications, and Verizon FiOS. That’s seven companies, and since Level3 says only five are creating this peering problem, two in there are off the hook but still not the best at what they do.
The idea here is pretty clear: these five ISPs want to be paid extra for doing the job they are already being paid for. Extra ports are required to handle the current level of traffic and these companies are assuming that when the pain becomes great enough -- that’s our pain, by the way -- Level3 or some Level3 customer like Netflix will pay the extra money to make the problem go away.
This ties into the current Net Neutrality debate and the new FCC rules that Chairman Tom Wheeler says he’ll be offering up later this month, rules that are supposed to keep the playing field level while somehow allowing for a version of fast-lane service. I already have doubts about Chairman Wheeler’s proposed rules.
Let’s understand something: Internet service is an extremely profitable business for the companies that provide it. Most on this notorious list are cable TV companies and generally they break even on TV and make their profit on the Internet because it costs so little to provide once the basic cable plant is built. So what these five are saying, if Level3 is on the level, is that the huge profits they are already making on Internet service just aren’t quite huge enough.
I’d call this greedy except that Gordon Gekko taught us that greed is good, remember, so it must be something other than greedy.
It’s insulting.
So here is my solution to the problem. I suggest we look back to the origin of peering, which took place in the dim recesses of Internet history circa 1987. Back then the Internet was owned and run by the National Science Foundation and was called NSFnet. Lots of backbone providers served NSFnet and also built parallel private backbones that were generally built from T1 (now called DS1) connections running at 1.5 megabits-per-second. Most backbone links today are 10 gigabits-per-second and there are often many running in parallel to handle the traffic. Back in the NSFnet days peering was done at a dozen or so designated phone company exchange points in places like Palo Alto and San Diego where backbone companies would string extra Ethernet cables around the data centers connecting one backbone with another. That’s what peering meant -- 10 meters or less of cable linking one rack to another. Peering was cheap to do.
Peering also made for shorter routes with fewer hops and a generally lower load for both backbones involved, so it saved money. Nobody paid anybody for the service because it was assumed to be symmetrical: as many bits were going in one direction as in the other so any transaction fees would be a wash. Most peering remains free today with Level3 claiming that only three of its 51 peers are paying (or are being paid, it isn’t clear which from the post).
The offending ISPs are leaning on the idea that with Content Distribution Networks for video from Netflix, YouTube, Amazon, and Hulu, the traffic is no longer symmetrical. They claim to be getting more bits than they are giving and that, they say, is wrong.
Except it’s actually right (not wrong) because those bits are only coming because customers of the ISPs -- you and me, the folks who have already paid for every one of those bits -- are the ones who want them. The bits aren’t being forced on the ISPs by Netflix or Level3, they are being demanded from Netflix and Level3 by us, the paying customers of these ISPs.
The solution to this problem is simple: peering at the original NSFnet exchange points should be forever free, and any participant that starts to consistently clip data and does nothing about it should be thrown out of the exchange point.
Understand that where there were maybe a dozen exchange points 25 years ago, there are thousands today, but if a major ISP or backbone provider doesn’t have a presence at the big old exchange points -- that original dozen -- well they simply can’t claim to be in the Internet business.
These companies are attempting to extort more millions from us just to provide the service we have already paid for.
I say throw the bums out.
I came across this news story today in which a Russian space official suggests the US consider using trampolines to get astronauts and supplies to the International Space Station. It’s all about economic sanctions applied to Russia over its annexation of Crimea and other meddling in Ukraine. The Russian space agency, you see, has been hard hit by the cancellation of at least five launches. Except according to my friends in the space biz Russia hasn’t been hurt at all.
Space customers pay in advance, way in advance. All five canceled NASA launches were paid for long ago and the same for a number of now-delayed private launches. They may go ahead or not, it’s hard to say. But nobody in Russia is losing sleep over the problem because the space agency will actually make more money keeping the launchers on their pads than by firing them.
In time the sanctions may have some effect, but not for at least another year. At present the only people being hurt by these particular sanctions are Americans.
This is not to say the sanctions aren’t worth doing. Maybe they will be key to achieving US policy objectives. But they aren’t what they seem.
We all have friends (people we know) and friends (people we not only know but hang out with). Maybe the better contrast might be between friends and buddies. Well Avram Miller is one of my buddies. He lives down the road from me and my kids prefer his pool to ours because his is solar heated. The retired Intel VP of business development is quite a character, knows a lot of people who know people, and understands the business of technology at a level few people do. So when he wrote a post this morning predicting that Apple will clean Google’s clock in search, I sat up in my chair.
Avram’s thesis is that Steve Jobs felt betrayed by Google’s development of Android and decided years ago to go after the soft underbelly of the Googleplex by building a superior search product called Found that Apple would have no need to monetize -- the Switzerland of search. Please read Avram’s post and you’ll see he claims that Steve Jobs even pre-recorded his participation in the Found launch event scheduled for sometime next year. Which of course makes me wonder what else Steve may have prerecorded?
I believe Avram. We haven’t yet discussed this directly because Avram has spent the winter in Israel but that’s what makes this post so plausible. If there’s an Israeli scientist at the heart of Found, then Avram -- who has been the toast of the Tel Aviv tech scene all season -- would probably have bumped into him or her.
I love the Apple side of this but what gives it real import are the Google and Facebook aspects. Facebook has pivoted deftly to mobile, Google hasn’t particularly succeeded in social networking with Google+, so Google is more vulnerable than one might think. I’m not sure Avram is right that Zuckerberg & Company are the major threat, but if Apple can out-Bing Bing without needing the ad revenue, well Steve Jobs may well get his revenge on Google after all. As I guess he will be explaining to us sometime next year.
My last column discussed the intersection between Big Data and Artificial Intelligence and where things might be heading. The question for this column is can I (Bob Cringely) be replaced by a machine?
Look below the fold on most news sites and you’ll see ads that look like news stories but aren’t: "One Weird Trick to Grow Extra Toes!", or "The 53 Hottest Ukrainian Grandmothers!" I’m waiting for "One Weird Trick to Becoming a Hot Ukrainian Grandmother with Extra Toes!" Read the stories and they are total crap, that is unless you have a fetish for Ukrainian Grandmas… or toes. They are all about getting us to click through page after page and be exposed to ad after ad. Alas, in SEOWorld (the recently added 10th level of Hell) some people call this progress.
And if you actually read these stories rather than just look at the pictures, you’ll note how poorly written they are -- poorly written but obviously written by humans because machines wouldn’t make the mistake of repeating whole paragraphs, for example.
Lately I’ve noticed the writers are adding a lot of opinions. In a story on 40 Examples of Botched Celebrity Cosmetic Surgery, they seem to have gathered a collection of celebrity shots without regard to whether surgery actually happened, then simply surmise: "It looks to me like Jennifer Aniston has had some work done here, what do you think?"
If they can get us to comment, of course, it makes that page move up in the Google rankings.
This too shall pass. Years ago my friends Mario Fantoni and James Kowalick came up with a way to massively increase the readership of print and online display ads by throwing crazy images into the pictures. You remember those ads, which tended to feature things like babies dressed up as honey bees appearing to fly through a furniture ad. The crazy images got people to look closer at the ads. For a while many ads used this technique, which then died away just as quickly because we became immune. And so will "One Weird Trick to Get You to Look at 17 Pictures!" It’s a fad.
What isn’t a fad is writing original material that’s thought-provoking and gets people to respond, which is what I try to do here. The next step in Internet content, then, might be automating me. And I am sure it is coming.
Gmail recently altered its Terms of Service, making it absolutely clear that it does read our email, thank you, and a lot more. The part I found especially disturbing is Google’s assertion that my using Gmail gives it the right to produce "derivative works".
Under the Gmail Terms of Service Google can legally go through the 100,000+ messages sitting in my IN and SENT boxes and use that content to generate new columns. I have accessible right now online more than 1200 columns and stories totaling more than 1,000,000 words. What’s to keep Google (or anyone else for that matter) from beating the Hadoop out of that content to come up with an algorithm for generating Cringely columns? Then use all the material in my Gmail (Gmail hosts cringely.com email) as fodder for those new columns?
If Google Vision can recognize kittens after 72 hours of ab initio crunching there’s no way Google couldn’t find a way to generate columns like this one, especially if I’ve been gathering the material for them.
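To show how little magic would be required, here is about the crudest possible column-generating algorithm -- a word-level Markov chain. It’s my sketch, not anything Google has built, and the corpus string is a stand-in for those million words.

```python
# A bare-bones Markov-chain column generator -- a stand-in for the kind of
# "derivative works" engine I'm imagining, not anything Google has announced.
import random
from collections import defaultdict

corpus = "stand-in text: imagine 1,000,000 words of old Cringely columns here"

def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])   # remember what followed each word pair
    return chain

def generate(chain, length=50):
    out = list(random.choice(list(chain)))
    for _ in range(length):
        out.append(random.choice(chain.get(tuple(out[-2:]), ["."])))
    return " ".join(out)

print(generate(build_chain(corpus)))
```

Feed it a real archive instead of that one-line stand-in and it produces plausible-sounding, if vacuous, Cringely. The real thing would be far more sophisticated, but not different in kind.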
It’s not ego or paranoia driving my thinking here, it’s economics. You and I know that what you are reading right now is essentially worthless. But Wall Street doesn’t see it that way based on recent sales of online media properties. On a per-reader basis this rag would appear to be worth several million dollars. It’s not of course, but we’re smart and Wall Street is stupid. Or, more properly, Wall Street is automated.
Based on these comps I recently complimented Om Malik on becoming a billionaire.
Someone shortly will put together all these components and start to generate news from nothing. Well not from nothing because inspiration has to come from somewhere, but it’s happening. It’s just a matter of the idea becoming interesting enough to some PhD at Google or Yahoo or even Microsoft -- any outfit with access to our intimate secrets and the right to make derivative works based on that material.
Personally I welcome the competition. But I’m also moving my email back in-house, immediately.
This is the first of a couple columns about a growing trend in Artificial Intelligence (AI) and how it is likely to be integrated in our culture. Computerworld ran an interesting overview article on the subject yesterday that got me thinking not only about where this technology is going but how it is likely to affect us not just as a people, but as individuals. How is AI likely to affect me? The answer is scary.
Today we consider the general case and tomorrow the very specific.
The failure of Artificial Intelligence. Back in the 1980s there was a popular field called Artificial Intelligence, the major idea of which was to figure out how experts do what they do, reduce those tasks to a set of rules, then program computers with those rules, effectively replacing the experts. The goal was to teach computers to diagnose disease, translate languages, even figure out what we wanted when we didn’t know it ourselves.
It didn’t work.
Artificial Intelligence or AI, as it was called, absorbed hundreds of millions of Silicon Valley VC dollars before being declared a failure. Though it wasn’t clear at the time, the problem with AI was we just didn’t have enough computer processing power at the right price to accomplish those ambitious goals. But thanks to MapReduce and the cloud we have more than enough computing power to do AI today.
The human speed bump. It’s ironic that a key idea behind AI was to give language to computers, yet much of Google’s success has come from effectively taking language away from computers -- human language, that is. The XML and SQL data standards that underlie almost all web content are not used at Google, where they realized that making human-readable data structures made no sense when it was computers -- and not humans -- that would be doing the communicating. It’s through the elimination of human readability, then, that much progress has been made in machine learning.
You see in today’s version of Artificial Intelligence we don’t need to teach our computers to perform human tasks: they teach themselves.
Google Translate, for example, can be used online for free by anyone to translate text back and forth between more than 70 languages. This statistical translator uses billions of word sequences mapped in two or more languages. This in English means that in French. There are no parts of speech, no subjects or verbs, no grammar at all. The system just figures it out. And that means there’s no need for theory. It works, but we can’t say exactly why because the whole process is data driven. Over time Google Translate will get better and better, translating based on what are called correlative algorithms -- rules that never leave the machine and are too complex for humans to even understand.
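Here is the flavor of that in miniature -- no grammar anywhere, just counts of which phrases have been seen paired with which. The tiny phrase table below is invented for illustration; the real system learns billions of such mappings.

```python
# Miniature statistical "translation": no grammar, no parts of speech, just
# pick the target phrase most often seen paired with the source phrase.
# The counts below are invented for illustration.
from collections import Counter

phrase_table = {
    "this in english": Counter({"ceci en anglais": 7, "cela en anglais": 2}),
    "means that": Counter({"signifie cela": 5, "veut dire cela": 4}),
    "in french": Counter({"en français": 9}),
}

def translate(sentence):
    out = []
    for phrase in sentence.lower().split(" | "):   # pretend the phrases come pre-segmented
        counts = phrase_table.get(phrase)
        out.append(counts.most_common(1)[0][0] if counts else phrase)
    return " ".join(out)

print(translate("This in English | means that | in French"))
```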
Google Brain. At Google they have something called Google Vision that currently has 16,000 microprocessors, equivalent to about a tenth of our brain’s visual cortex. It specializes in computer vision and was trained exactly the same way as Google Translate, through massive numbers of examples -- in this case still images (BILLIONS of still images) taken from YouTube videos. Google Vision looked at images for 72 straight hours and essentially taught itself to see twice as well as any other computer on Earth. Give it an image and it will find another one like it. Tell it that the image is a cat and it will be able to recognize cats. Remember this took three days. How long does it take a newborn baby to recognize cats?
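Strip away the scale and "find another one like it" is just a nearest-neighbor search over feature vectors. In Google’s system the features are learned from all those frames; in this sketch of mine they are simply made up.

```python
# "Find another one like it," boiled down to a nearest-neighbor search over
# feature vectors. In the real system the features are learned from billions
# of frames; these three-number vectors are invented.
import math

library = {
    "tabby_cat.jpg":  [0.9, 0.1, 0.8],
    "tuxedo_cat.jpg": [0.8, 0.2, 0.9],
    "school_bus.jpg": [0.1, 0.9, 0.2],
}

def nearest(query):
    return min(library.items(), key=lambda item: math.dist(query, item[1]))[0]

print(nearest([0.85, 0.15, 0.85]))   # -> one of the cats
```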
This is exactly how IBM’s Watson computer came to win at Jeopardy, just by crunching old episode questions: there was no underlying theory.
Let’s take this another step or two. There have been data-driven studies of MRIs taken of the active brains of convicted felons. This is not in any way different from the Google Vision example except we’re solving for something different -- recidivism, the likelihood that a criminal will break the law again and return to prison after release. Again without any underlying theory, the same sort of system seems able to differentiate between the brain MRIs of felons likely to repeat and those unlikely to repeat. Sounds a bit like that Tom Cruise movie Minority Report, eh? This has a huge imputed cost savings to society, but it still has the scary aspect of no underlying theory: it works because it works.
Scientists then looked at brain MRIs of people while they are viewing those billions of YouTube frames. Crunch a big enough data set of images and their resultant MRIs and the computer can eventually predict from the MRI what the subject is looking at. That’s reading minds and again we don’t know how.
Advance science by eliminating the scientists. What do scientists do? They theorize. Big Data in certain cases makes theory either unnecessary or simply impossible. The 2013 Nobel Prize in Chemistry, for example, was awarded to a trio of scientists who did all their research by building computer models of the chemistry of enzymes. No enzymes were killed in the winning of this prize.
Algorithms are currently improving at twice the rate of Moore’s Law.
What’s changing is the emergence of a new Information Technology workflow that goes from the traditional:
1) new hardware enables new software
2) new software is written to do new jobs enabled by the new hardware
3) Moore’s Law brings hardware costs down over time and new software is consumerized.
4) rinse repeat
To the next generation:
1) Massive parallelism allows new algorithms to be organically derived
2) new algorithms are deployed on consumer hardware
3) Moore’s Law is effectively accelerated though at some peril (we don’t understand our algorithms)
4) rinse repeat
What’s key here are the new derive-deploy steps and moving beyond what has always been required for a significant technology leap -- a new computing platform. What’s after mobile, people ask? This is after mobile. What will it look like? Nobody knows and it may not matter.
In 10 years Moore’s Law will increase processor power by 128X. By throwing more processor cores at problems and leveraging the rapid pace of algorithm development we ought to increase that by another 128X for a total of 16,384X. Remember Google Vision is currently the equivalent of 0.1 visual cortex. Now multiply that by 16,384X to get 1,638 visual cortex equivalents. That’s where this is heading.
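Here is that arithmetic spelled out, using my round numbers and an assumed 18-month doubling time:

```python
# The back-of-the-envelope math behind those numbers. The 18-month doubling
# time is my assumption; the 128X figures are the round numbers used above.
years = 10
hardware_gain = 2 ** (years / 1.5)          # ~102X from Moore's Law; call it 128X
algorithm_gain = 128                        # assume algorithms keep pace with hardware
total_gain = 128 * algorithm_gain           # 16,384X
cortex_equivalents = 0.1 * total_gain       # Google Vision today ~ 0.1 visual cortex
print(round(hardware_gain), total_gain, cortex_equivalents)   # 102 16384 1638.4
```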
A decade from now computer vision will be seeing things we can’t even understand, much as dogs today sniff out cancers we can’t detect.
We’ve both hit a wall in our ability to generate appropriate theories and found in Big Data a hack to keep improving. The only problem is we no longer understand why things work. How long from there to when we completely lose control?
That’s coming around 2029, according to Ray Kurzweil, when we’ll reach the technological singularity.
That’s the year the noted futurist says $1000 will be able to buy enough computing power to match 10,000 human brains. For the price of a PC, says Ray, we’ll be able to harness more computational power than we can even understand or describe. A supercomputer in every garage.
Matched with equally fast networks this could mean your computer -- or whatever the device is called -- could search every word ever written, in its entirety and in real time, to answer literally any question. No stone left unturned.
Nowhere to hide. Apply this in a world where every electric device is a networked sensor feeding the network and we’ll have not only incredibly effective fire alarms, we’re also likely to have lost all personal privacy.
Those who predict the future tend to overestimate change in the short term and underestimate change in the long term. Desk Set from 1957, with Katharine Hepburn and Spencer Tracy, envisioned mainframe-based automation eliminating human staffers in a TV network research department. That has happened to some extent, though it took another 50 years and people are still involved. But the greater technological threat wasn’t to the research department but to the TV network itself. Will there even be television networks in 2029? Will there even be television?
Nobody knows.
It may be hard to believe but there was a time when people looked forward to new versions of operating systems. Before Windows XP many PC operating systems were not very good. The developers of applications had to code around problems. Companies wanted their business applications to be more reliable. Over the years operating systems improved.
Before Windows XP Microsoft had two PC operating systems: one the descendant of Windows 95, the other of Windows NT. In the years that preceded Windows XP Microsoft incrementally improved the user interface on the Windows 95 side and the reliability and performance on the NT side. Windows XP was the convergence of the best of both. Before XP Microsoft released a new version of its operating system almost every year. It would be almost six years until a successor to XP -- Windows Vista -- hit the market (with a thud). Six years was an impressive run, but still XP lived on. Windows Vista was not the market success Microsoft expected. Vista introduced too many changes. The market chose to stay with XP. It would be another two years before a true successor to XP emerged in Windows 7.
Why didn’t everyone upgrade to Windows 7? They didn’t need to. Since applications provide the real value, Windows XP users had everything they needed. The applications did everything they needed and the operating system was solid. There was very little value in upgrading.
XP is now 12 years old -- the same age as my son Channing -- and both are a little cranky. This week Microsoft officially ended support for XP. Now what’s a loyal XP user to do? Just remember it is the applications that provide the value.
I think Microsoft hasn’t been especially smart about the way it has ended support for XP. Support could go on for years more, making good money for Redmond. Not all support is gone -- did you know that? Microsoft still has to support the Government of Canada’s use of Windows XP. Governments can do things like that. And since Microsoft already has to provide this paid support to Canada, why not sell it to anyone else who wants to pay?
Here are the rules I’d set:
I’ve heard 90 percent of the critical vulnerabilities found in Windows XP could be mitigated by removing administrator rights. There are a few applications that require administrator rights to work properly, the most significant of which is Apple’s iTunes. All popular Windows XP applications should be able to operate with only simple user rights. If this is not the case the application owner should fix the application. If iTunes is a problem for XP then Apple should fix it.
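If you want to check whether something on your own machine is running elevated when it shouldn’t be, a few lines will tell you. This is just a diagnostic sketch of mine built on a standard Windows shell call, not anything Microsoft ships:

```python
# A quick check for administrator rights on Windows -- a diagnostic sketch,
# not a Microsoft tool. Uses the standard IsUserAnAdmin shell call.
import ctypes

def running_as_admin() -> bool:
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False   # not on Windows at all

if __name__ == "__main__":
    print("Elevated" if running_as_admin() else "Plain user rights -- as it should be")
```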
The market has changed and the needs of Microsoft customers have changed, too. Upgrades need to be easier and cheaper. We can expect people to keep their workstations for five to 10 years. By the same token future operating systems should run fine on five-year-old hardware. You should be able to plan on your operating systems lasting from five to 10 years, too. Maybe we don’t need a new version every 2 or 3 years.
Microsoft should make upgrades cheaper. Given the size of the customer base and its reluctance to upgrade, I think there is more money to be made by lowering prices and increasing demand. Microsoft is trying that in the extreme with Windows Phone.
Microsoft made mistakes with Windows Vista and Windows 8. It needs to learn from those mistakes, but alas it probably won’t or there wouldn’t have even been a Windows Vista or Windows 8.
The Windows interface is the product of over 25 years of evolution. People understand it. They are comfortable with it. Imagine buying a car and finding out all the controls had been moved. That is what Microsoft did with Windows 8. Mobile devices and workstations are very different things. It may not be practical to have the same interface on both. Microsoft would be smart to get better tuned into the needs of its users and the market.
Historically the IT industry has been at its best when there is vigorous competition. The Linux community needs to do a better job of tuning into the needs of PC users. There are reasons the Linux desktop operating systems have not enjoyed the same level of success as has Windows. There are still rough spots and applications that are not available. The sooner there are solutions to these problems, the better.
Then we can stop reminiscing about the Good Old Days of Windows XP.
Facebook, trying to be ever more like Google, announced last week that it was thinking of building a global ISP in the sky. Now this is something I’ve written about several times in the past and even predicted to some extent, so I’d like to look at what Facebook has said so far and predict what will and won’t work.
Longtime readers will know I’ve written twice before (here and here) about satellite Internet and twice about aerial Internet, too (here and here), so I’ve been thinking about this for over a decade and even ran some experiments back when I lived in Charleston. Oh, and of course I am building an electric airplane described here.
What Facebook CEO Mark Zuckerberg revealed are plans to work through Internet.org to implement a global network using drones and satellites. In my view drones won’t work as proposed but satellites will. I’ll explain why, then offer toward the end of this column what I believe is a more plausible method of building an aerial Internet.
Drones are a bad idea for this purpose if they are expected to be solar powered and run for weeks or months without landing. Think of it this way: the best use case for solar drones is operating at the equator where there’s lots of sun and it shines precisely 12 hours each day and the worst case is operating at the poles where winter operations simply won’t work and in any case the sun (when it shines at all) isn’t as bright. If you want year-round solar-powered operations in the middle latitudes where lots of people live, then, you’ll have to design for no more than eight hours per day of good sunlight which means 16 hours per day of battery-powered flight.
Wow, this is a tough order to fill! From an engineering standpoint the challenges here look to be insurmountable with present technology.
There is very little hard information available about these overnight solar drones, but it is interesting to look at Titan Aerospace, led by Microsoft and Symantec veteran Vern Raburn, to see what’s typically proposed. The Titan website says the smaller of its two models, the Solara 50, will have seven kilowatts of solar panels on the wings and tail surfaces, with batteries in the wings. They never say, but let’s guess the 50 refers to a wingspan of 50 feet (the other model is the Solara 60, which is larger, but in neither case can I imagine the number refers to meters), which makes for a pretty big, if skinny, airframe.
If the Solara 50 generates seven kilowatts for eight hours per day with no battery losses at all (impossible), then averaged around the clock it can output 2,333 watts, or about 3.1 horsepower. Admittedly this is just an overgrown radio control glider, but it seems to me that 3.1 horsepower is too little to maintain altitude in the absence of thermal lift, which is also dependent on sunshine.
Remember, too, that there’s a payload of Internet electronics that has to be operated 24/7 within that 7 kW power budget. I’m guessing if operation is at 60,000 feet that the biggest power consumer for the electronic payload will be heaters, not transmitters.
There’s no way such a vehicle could make it to 60,000 feet on its own unless they are counting on mountain wave lift, which isn’t everywhere. So I expect it will have to be carried aloft by a mother ship. Once launched at 60,000 feet with full batteries the glider will have the advantage that parasitic drag is far less at high altitudes, though (lift) induced drag is higher. Speed doesn’t matter because beyond fighting winds to stay in one spot there’s no reason to do much more than circle.
But wait! Circling itself significantly compromises the output of those solar cells, since half the time they will be facing away from the sun. So maybe it doesn’t circle at all but just drifts with the jet stream, making no attempt to hold a station. This detail isn’t covered, by the way, by Zuckerberg’s manifesto. I wonder if he has thought about it?
Our best case, then, is a free-drifting glider trying to maintain altitude overnight at 60,000 feet while operating its electronics package. Can it be done with 3.1 horsepower? That depends in large part on weight. Riding out 16 hours of darkness at 2,333 watts requires roughly a 37 kWh battery pack, which with the best present Li-ion technology weighs about 144 kg. If the battery comprises half the weight of the drone that gives it a gross weight of 288 kg, or about 635 lbs. Now this just happens to be very near the weight of the electric Quickie I’ve been building, and I calculate the minimum power to maintain level flight of that aircraft at around 3 kW. Admittedly the Solara 50 flies slower than my Quickie though it is vastly larger, but then again parasitic drag matters very little at high altitudes and low speeds. The question remains: is 2,333 watts enough to do the job while still powering the electronics?
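Here is the whole power budget in one place as a back-of-the-envelope calculation. The sun hours, cell specific energy, and the 50/50 weight split are my assumptions, not Titan’s numbers:

```python
# Back-of-the-envelope power budget for an overnight solar drone. Sun hours,
# battery specific energy, and the 50/50 weight split are assumptions.
solar_watts = 7000                              # claimed panel output
sun_hours   = 8                                 # usable mid-latitude sun per day
avg_watts   = solar_watts * sun_hours / 24      # ~2,333 W available around the clock
horsepower  = avg_watts / 745.7                 # ~3.1 hp

dark_hours  = 24 - sun_hours
battery_kwh = avg_watts * dark_hours / 1000     # ~37 kWh to ride out the night
wh_per_kg   = 260                               # optimistic 2014-era Li-ion cells
battery_kg  = battery_kwh * 1000 / wh_per_kg    # ~144 kg
gross_kg    = battery_kg * 2                    # if the pack is half the airframe weight
print(round(avg_watts), round(horsepower, 1), round(battery_kwh),
      round(battery_kg), round(gross_kg))       # 2333 3.1 37 144 287
```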
I say it’s iffy and iffy isn’t what you want to count on for reliable Internet service. So forget the solar-powered drones.
In contrast, a blimp augmented with solar power for station-keeping might actually work, which is probably why Google has settled on balloons for its Internet-in-the-sky. The reason why Google opted for balloons over blimps probably comes down to the power required for station-keeping. If they are just going to let it drift then a balloon is cheaper than a blimp but just as good. Score one for Google.
Satellites, I think, explain themselves quite well; the only problem is getting enough of them -- 1,000 or more -- to make a reliable network. The more satellites in the constellation the better, and with space costs always coming down this is definitely the way to go, though it will take several years and cubic dollars to complete.
So is there some middle ground -- some way to make a cheaper more reliable Internet-in-the-Sky that can be up and running in a year or two? I think there is and I described it back in 2004:
Now -- strictly because I am twisted this way -- let’s take this experiment a step further. Sveasoft supports mesh networking, though with a practical limit of three hops. Aerial WiFi links of 10+ KM ought to be possible and maybe a LOT longer. The hardware cost of a WRT54GS and antenna are on the order of $100. There are, at the moment I am writing this, more than 1,000 small aircraft flying on IFR flight plans in the U.S. So for not very much money you could have a 1,000-node aerial mesh that could serve not only airborne but also terrestrial users. Triple the money, and you could put in each plane a Locustworld mesh with two radios for each node and truly robust mesh networking.
Updating this for 2014 and taking into account the interest of Facebook, I’d advise Mark Zuckerberg to put a mesh-enabled Internet access point on every one of the more than 23,000 active airliners in the world today. Most of those aircraft are in the air at least eight hours per day, so at any time there would be about 8,000 access points aloft, conveniently going to and from population centers while overflying remote areas. If each access point was at 30,000 feet it could serve about 120 square miles. Figuring a 50 percent signal overlap, those 8,000 airplanes could offer Internet service, then, to about 480,000 square miles. That’s hardly stellar coverage, I admit, and means a hybrid system with satellites and airplanes makes more sense, but it could come at zero cost (charge passengers for Internet service) and would mean that no airliner would ever again be lost at sea without being noticed or tracked.
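Here’s the coverage arithmetic for anyone who wants to fiddle with the assumptions (the eight-hour duty cycle, footprint, and overlap figures are my estimates from the paragraph above):

```python
# Rough coverage math for an airliner-borne mesh. The duty cycle, footprint,
# and overlap figures are the estimates from the paragraph above.
airliners = 23000
aloft = airliners * 8 / 24                     # ~7,667 in the air at any moment; call it 8,000
footprint_sq_mi = 120                          # per access point at 30,000 feet
overlap = 0.5                                  # half of every footprint overlaps a neighbor
coverage = int(8000 * footprint_sq_mi * (1 - overlap))
print(round(aloft), coverage)                  # 7667 480000
```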
Cisco Systems this week announced its $1 billion Intercloud that will link nine partner companies to offer an OpenStack-based, app-centric cloud system supposedly aimed at the Internet of Things. That’s a lot of buzzwords for one press release and what it means is Cisco doesn’t mean to be left behind or to be left out of the IT services business. But Cisco’s isn’t the big cloud announcement this week: the really big announcement comes today from little Mainframe2.
This morning at the big nVIDIA GPU Technology Conference in Silicon Valley, Mainframe2 demonstrated two new PC applications -- Google Earth and Microsoft Word -- running on its graphical cloud. This is significant not only because it implies (there’s been no announcement) that Mainframe2 has two new customers, but because both companies are cloud vendors in their own right, so we can guess that Mainframe2 will be supported at some point by both Google’s cloud platform and Microsoft Azure.
Mainframe2, as you’ll recall from the two columns I’ve written previously about it (here and here), is a startup that enables cloud hosting of graphically-intensive PC applications. If you are a software developer and want to put your app on the cloud, Mainframe2 claims you can do so in 10 minutes or less and for almost no money. The app runs on a cloud of nVIDIA virtual GPUs with the screens painted as HTML5 video streams. This means you can effectively run Windows apps on your iPad, for example.
The first apps demonstrated on the Mainframe2 platform were from Adobe Systems and Autodesk. Now it has added Google and Microsoft. Oh, and Firefox is now a supported browser in addition to Chrome, Opera, and Safari.
I don’t know any more about this than you do at this point but let’s take some guesses about where this is headed.
All of these software companies that have allowed their applications to be demonstrated on Mainframe2 are potential -- even likely -- customers for the company. Google and Microsoft, as cloud vendors, are likely to license Mainframe2 (and its underlying nVIDIA Grid) in some form for their clouds. Mainframe2 launched originally on Amazon Web Services but I have to believe that support will shortly appear from a whole list of the usual suspect cloud service providers. That means Google and Microsoft will likely be offering their own graphical clouds.
While previous Mainframe2 demos were run from a single data center on the US west coast, the new demos are supported from data centers on the US east coast and in Europe and Japan as well.
Given that this week’s demo of Microsoft Word on Mainframe2 can’t officially run on Internet Explorer, you can bet Redmond will be fixing that problem shortly.
I’m not here to announce Game Over, but it seems to me the addition of these companies that normally don’t have anything to do with each other to the Mainframe2 list gives this little company an insurmountable lead in cross-platform cloud support for traditional desktop applications. Now all we need are Linux and Mac apps on Mainframe2. I’m sure the former will be coming and I’m not so sure about the latter but we’ll see.
This is a kick in the head to competing efforts that are based on protocols like VNC and RDP, which simply can’t repaint the screen as fast as Mainframe2. Think about it, RDP is a Microsoft technology yet here Microsoft is appearing to support Mainframe2. That’s a big deal.
Going even further, Mainframe2’s ability to dynamically reassign virtual GPUs to a task implies a great leveling in the desktop arms race. Once a very broad selection of popular applications are available on Mainframe2 it won’t matter beyond a certain point how many cores or how much RAM you have on your desktop (or mobile!). Any computer will be able to run any app on any platform at any speed you are willing to pay for, so my three-times-per-year use of Photoshop is going to fly.
This is a huge change in the market that PC hardware vendors will hate but PC software vendors should love because it will give them a whole new -- and much broader -- distribution channel. Sure there will be customers who’ll still choose to run their apps locally, but for another class of casual users there is a new alternative. And for the software companies and cloud vendors there’s a whole new source of revenue.
What we’ll see in the next 1-2 years is broad adoption of this platform with eventually most ISVs offering Mainframe2 versions of their products. Some companies will commit to the platform exclusively, I’m sure. But this period is like porting all your music from vinyl to CDs: there’s a lot of money to be made but it’s mainly just doing the old stuff in a different way. What I wonder about is what happens after this stage, when we start to see native Mainframe2 apps? What will those be like? When will we see the first native Mainframe2 game, for example? What will that be like?
This is going to be exciting -- something we wouldn’t even have dreamed of a year ago.
Pat McGovern died this week at 76 in Palo Alto, totally surprising me because I didn’t even know he had been ill. Uncle Pat, as we called him, was the founder of Computerworld back in 1967 and, the year before that, of research firm International Data Corp., started in his suburban Boston kitchen. Pat helped turn the computer business into an industry and employed a lot of people along the way, including me. He was an exceptional person and I’d like to tell you why.
Pat ran a company that published about 200 computer magazines all over the world. Each December he traveled the globe to give holiday bonuses to every employee he could find. The bonuses were a meaningful amount of crisp cash money in an envelope that Pat would hold in his hand until he’d finished his little speech about how much he appreciated your work. And here’s the amazing part: he knew what we did. He read the magazine, whichever one it was, and knew your contribution to it. You’d get a smile and a handshake and 3-4 sentences about something you had written or done and then would come the envelope and Pat would be on to the next cube.
This by itself is an amazing thing for an executive to do -- traveling all month to hand out 3000 envelopes and doing it for more than 30 years. Does your CEO do that?
But wait, there’s more!
When I was fired 18 years ago by InfoWorld Pat knew about it. It was a big enough deal that someone told him and he didn’t stop it because he trusted his executives to do the right thing. But when it became clear after months of rancor and hundreds of thousands in legal bills that it hadn’t been the right thing Pat and I met in a hotel suite in New York, hand wrote an agreement and he offered me back my old job.
Understand this was my boss’s, boss’s, boss’s boss asking me to return.
I couldn’t do it, but we stayed in touch. Pat invested in a startup of mine (it failed). He made a stalking horse offer for my sister’s business, getting her a better deal in the process. He always took my calls and always answered my e-mail. And when he did he always threw in 3-4 sentences to show that he had been reading or watching my work.
We weren’t close by any means, but we respected each other.
There is something important in this about how people can work together, something I hope you find today in your organization. I never worked for Pat McGovern. I was always down in the engine room, shoveling words. But Pat knew about the engine room, understood its importance, and realized that the people down there were people and he ought to try to know them. The result was great loyalty, a better product, and a sense of literal ownership in the company that I retain today almost two decades later.
It wasn’t perfect. The guy let me be fired! But he was the only one who ever apologized for it and that takes a big man.
A black swan is what we call an unexpected technical innovation that disrupts existing markets. Intrinsic to the whole black swan concept is that you can’t predict them: they come when they come. Only today I think I’ll predict a black swan, thank you, and explain exactly how the automobile business is about to be disrupted. I think we’re about two years away from a total disruption of the automobile business by electric cars.
One of the readers of this column is Robert Cumberford, design editor at Automobile Magazine. Nobody knows more about cars than Bob Cumberford, who has written about them for more than half a century. Here’s what he told me not long ago about the Tesla Model S:
"Since the entire automotive industry delights in bad-mouthing electric cars, and no one expects them to amount to anything significant, I predict that the Great Unwashed will enthusiastically embrace electric vehicles as soon as there is a direct personal experience. I’ve been professionally involved with cars for 60 years now and can say with the certainty based on having driven perhaps three thousand different cars over that period, perhaps more, but not surely not fewer, that the Tesla Model S is the best car I’ve ever driven. Oldest was probably a 1914 Benz from the Mercedes museum collection, newest whatever I drove last week. Wide range of experience, then. Nothing better than the Tesla. Faster? Sure. Sportier? Absolutely. But better? Nothing. I want one. Probably will never have one, but the desire is there, and will be assuaged by something electric one day Real Soon Now.
"I see the acceptance of electric cars happening in a sudden rush. Maybe not this year, maybe not for a couple of years yet. But it will happen in a magic rush, just as the generalized adoption of computers happened in only a few years".
There are two obvious problems with electric cars today (this is Bob Cringely writing with thanks to Bob Cumberford) and they are driving range and cost. My neighbor Avram Miller drives a Nissan Leaf and loves it, but the Leaf won’t make it all the way to San Francisco and back so Avram requires a second car. Any car that needs you to have another car to make it practical can’t qualify as a black or any other variety of swan. What’s needed is a single car solution.
You can get exactly that in a Tesla Model S but it costs too much. The base model starts around $55K with the top-of-the-line running around $95K and the only significant difference between the two is how much battery capacity and therefore driving range you have. A $95K Tesla Model S has plenty of range to qualify as a single car solution but it just costs too darned much money.
As a second data point to confirm Cumberford’s opinion of the Model S, Computer History Museum chairman Dave House (ex-Intel) drives a Model S and says it’s his favorite car, ever. Dave’s other car, by the way, is a Bugatti Veyron. Now there’s a bumper sticker!
If only the Model S cost, say, $20K, right? I think that’s coming, though maybe not from Tesla.
The black swan we are talking about here isn’t a car but a power train, and probably more specifically a battery technology. All that’s required for electric cars to really break out is a way to make cheaper batteries, and I am sure that’s coming.
Elon Musk of Tesla says he is going to make that happen by building a $5 billion lithium-ion battery factory, driving down the cost of manufacturing. This will work, I’m sure, and I applaud Elon for his commitment. But I strongly suspect that it will end up being a $5 billion boondoggle. The better approach would be to abandon lithium-ion for a superior battery technology.
There are dozens of startups working today on alternative battery chemistries intended to dramatically increase the range and decrease the cost of electric cars. I don’t know which of these will ultimately dominate but I am sure one will, which is why I can be so confident in predicting a black swan. With dozens of groups working on the problem and an eventual market worth probably $1 trillion, I have no doubt there will be a solution within the next couple of years.
Here’s one of my old friends describing his work in this area: "Last year I produced a sulfur-lithium-lead cell combined with a carbon-aluminum ultracapacitor. The ultracapacitor layers form the separator between lithium cells. Such a configuration charges ~10 times faster, has twice the energy density of conventional lithium-ion and no real limits as to surge current, so no overheating that would lead to boom".
Maybe my friend has the solution but it’s just as likely it will come from someone else. My point is that the solution is coming. Twice the range, one tenth the recharging time and safer, too. That’s a black swan.
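To see why the chemistry is the whole ballgame, run the range numbers. Every figure here is an assumption of mine -- pack weight, specific energy, consumption -- not my friend’s data:

```python
# Why battery chemistry is the whole ballgame: a rough range calculation.
# Pack weight, specific energy, and consumption figures are assumptions.
pack_kg = 540                    # roughly a big 2014 EV battery pack
liion_wh_per_kg = 150            # conventional lithium-ion at the pack level
wh_per_mile = 300                # large-sedan consumption

def range_miles(wh_per_kg):
    return pack_kg * wh_per_kg / wh_per_mile

print(range_miles(liion_wh_per_kg))         # 270.0 miles
print(range_miles(liion_wh_per_kg * 2))     # 540.0 miles: double the density, double the range
```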
Regular readers will know that I’ve had my doubts about Bitcoin. Recent events in the Bitcoin world, especially the failure of Mt. Gox, the biggest Bitcoin exchange, have caused further problems for the crypto currency. But I’m oddly cheered by these events and am beginning to think Bitcoin may actually have a chance of surviving as a currency.
Willie Sutton, who made his career robbing banks, once explained that he robbed them "because that’s where the money is". Well recent bad news in the world of Bitcoin follows a similar theme: yes there have been thefts, corruption, and a suicide, but all this is based not on Bitcoin’s failure but on its success. The wonder isn’t that Mt. Gox lost $460 million in Bitcoins but that it had $460 million in Bitcoins to lose.
These events are growing pains, nothing else, and the fact that Bitcoin values have staggered each time and then quickly recovered shows that the market also believes this to be true.
All the recent breaches (if they are breaches -- there’s some question now in the case of Mt. Gox) have been based on bad security at these sites. Mt. Gox started life as a Magic: The Gathering card-trading site. Big money attracts sophisticated teams of hackers, which is why real banks hire sophisticated people to protect them. Dudes running Magic: The Gathering sites aren’t that. They don’t have a chance. So look for more Bitcoin exchanges to fall until security standards rise enough to make such thefts less attractive.
Bitcoin itself has remained secure so far. Owning your Bitcoin yourself (encrypted, on your own machine with backups) is still quite safe. The issue is that if a machine with Bitcoins on it can be compromised, the Bitcoins can be sent away anonymously. They are ridiculously easy to fence.
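Keeping your own encrypted backup really is only a few lines of work. Here is a minimal sketch using the Python cryptography package; the file names are placeholders and this is no substitute for a proper wallet tool:

```python
# Minimal sketch of encrypting a wallet backup before it goes anywhere near a
# cloud drive. Needs the "cryptography" package; file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # keep this offline, separate from the backup
with open("backup.key", "wb") as f:
    f.write(key)

with open("wallet.dat", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())

with open("wallet.dat.enc", "wb") as f:
    f.write(ciphertext)
```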
The bottom line is there appears to be a real niche for pseudonymous currency, even if mainly for the subversive world. Also, it opens new doors for electronic payments.
Famed economist Nouriel Roubini -- Dr. Doom from New York University -- claims Bitcoin is a Ponzi scheme, which makes me like the currency more and more. It makes sense he’d see it that way, but then the Argentinian peso feels pretty Ponzi-like today, too. Only time will tell, but if the value endures then it’s not a scheme. It’s weird, but not a scheme.
So if Bitcoin gets killed it will probably be because of something that replaces it.
If the announcement of Facebook paying $19 billion in cash and stock for WhatsApp surprised you then maybe you’d forgotten this prediction I published on January 8th:
#6 -- Facebook transforms itself (or tries to) with a huge acquisition. I wrote long ago that we’d never see Facebook in the Dow 30 Industrials. The company is awash in users and profits but it's lost the pulse of the market if it ever had it. Trying to buy its way into the Millennial melting data market Facebook offered $3 billion for Snapchat, which turned it down then rejected a $4 billion offer from Google. Google actually calculates these things, Facebook does not, so where Google will now reverse-engineer Snapchat, Facebook will panic and go back with the BIG checkbook -- $10+ billion. If not Snapchat then some other overnight success. Facebook needs to borrow a cup of sugar somewhere.
Now $19 billion may still seem like too much money but remember the alternative for Facebook is oblivion. Facebook stock is overpriced, making the acquisition cheaper for the company than it looks. On a per-user basis it’s still substantially below Facebook’s own numbers. And part of the reason for that big number is simply to have it be a big number -- big enough to make the point to Wall Street that Facebook is determined to buy its way in front of the wave.
The only limit on what it would have paid for WhatsApp, in fact, was that it had to leave something for the next big acquisition, because this is not the end. Look for another $5+ billion acquisition soon for Facebook (maybe higher if it can use all or mostly stock).
This is what happens, you see, when the mojo goes, leaving only money behind. You spend it trying to appear youthful again.
My e-mail inbox this morning contains 118,306 messages totaling about seven gigabytes. I really should do something about that but who has the time? So I keep a lot of crap around longer than I should. I have, for example, every message I have sent or received since 1992 when I registered cringely.com. Those obviously occupy a lot more than seven gigabytes, though interestingly enough the total is less than 20 GB. My storage strategy has been a mixed bag of disks and cloud services and probably stuff I’ve forgotten along the way. So I’ve decided to clean it up by standardizing on Microsoft’s OneDrive (formerly SkyDrive) cloud storage service, just relaunched with its new name. I need about 30 GB of storage right now but I don’t want to pay for anything.
No problemo.
Microsoft gives away free OneDrive accounts with 7 GB of storage. If you save your mobile pictures to OneDrive Microsoft ups that to 10 GB. If you get a friend to sign up the company will give you another 500 MB of free storage for each friend up to a maximum of 5 GB.
So I signed up for the free 10 GB photo plan, created two more email addresses on my mail server, then recommended those accounts sign up for OneDrive, too, which they obediently did. So in a few minutes I had turned my OneDrive into a virtual ThreeDrive and exceeded my 30 GB storage goal, taking it to a total of 31 GB.
This all requires some management on my part, but it’s just writing little scripts to store, find, or restore files or the contents of files. Everything is encrypted of course, so there’s nothing for Microsoft or the NSA to read without a lot of effort.
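Here’s a minimal sketch of the kind of little script I mean, in Python, assuming the cryptography package is installed and that OneDrive syncs a local folder (the folder path, key file, and example file name below are just illustrations, not anything Microsoft prescribes):

```python
# Minimal sketch: encrypt a file locally, then drop it into a OneDrive-synced
# folder so Microsoft only ever stores ciphertext. Assumes `pip install cryptography`
# and that ~/OneDrive is the locally synced folder (an assumption -- adjust to taste).
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path.home() / ".onedrive_key"        # keep this key out of the cloud!
ONEDRIVE = Path.home() / "OneDrive" / "archive"

def load_or_create_key() -> bytes:
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def store(path: str) -> Path:
    """Encrypt `path` and write it into the synced OneDrive folder."""
    ONEDRIVE.mkdir(parents=True, exist_ok=True)
    data = Path(path).read_bytes()
    token = Fernet(load_or_create_key()).encrypt(data)
    dest = ONEDRIVE / (Path(path).name + ".enc")
    dest.write_bytes(token)
    return dest

def restore(enc_path: str, out_path: str) -> None:
    """Decrypt a previously stored file back to `out_path`."""
    token = Path(enc_path).read_bytes()
    Path(out_path).write_bytes(Fernet(load_or_create_key()).decrypt(token))

if __name__ == "__main__":
    # Hypothetical example file name; substitute whatever you want archived.
    print(store("mailbox-1992.mbox"))
```

The particular tool doesn’t matter; what matters is that the encryption happens on my machine before a single byte leaves it.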
How can Microsoft afford to do this? Well for one thing I doubt that one user in 100 would bother to replicate my effort. For another, Microsoft is probably grateful for the hack because I wrote about it and helped promote the service's name change. It’s a win-kinda-win scenario.
Now what to do about all my video files? I have those on more than 30 external disks, some of the old ones only 1-2 GB in size while others are up to 2 TB. So far I’m chugging through the disks one at a time on an old Pentium 4 box running Spinrite 6 from Gibson Research -- still the best file recovery program you can buy. Like all the software I use I paid for my Spinrite copy, which causes Steve Gibson to comment every time we communicate. According to Steve I may be the only blogger who pays. Silly me.
Having a dedicated Spinrite machine makes a lot of sense because restoring a really corrupted disk that’s been sitting since 2005 or so can take up to a week of continuous churning. I’ll eventually move it all to some kind of Network Attached Storage, this time a multi-disk array with clever self-maintenance routines copied across to a similar box with my mother-in-law back in Charleston.
I wonder if the NSA knows about her?
One more thing: if this column seems to contain a lot of fluff that’s because I’m madly trying to finish my damned IBM eBook, which turned out to be not at all the lark I expected. I’d forgotten how hard books are to do. But this one is nearing its end and includes quite a bit of shocking new material never published before. I like to think it will have been worth the wait, but you’ll have to decide that.
Tech news changed last week faster than the weather. At the beginning of the week Charter Communications was trying to buy Time-Warner Cable, then on Tuesday Apple was rumored to be close to a deal for Apple TV to replace or augment Time-Warner’s cable boxes, then on Thursday both stories crashed and burned when Comcast bought TWC out from under Charter, killing the Apple deal in the process. But does it really have to end that way? Not if Apple is smart.
I don’t care about cable consolidation, frankly, though a lot of other people do, seeing too much power being concentrated in Comcast. I would just like to see things shaken up in the TV industry bumping services quickly forward to where I’ll only have to pay for the stuff I actually want to watch. I suspect that’s where the Apple-TWC deal was heading. Apple would pay TWC for the privilege of taking over a substantial part of the cable company’s workload, cutting costs and raising TWC profits in the process. It was a desperate attempt on TWC’s part to avoid the clutches of John Malone’s Charter Communications.
Let’s be clear, TWC going with Apple would have undercut cable hegemony, something the company would only do as a last resort.
Then Comcast appeared, playing the white knight, cutting a deal for just south of TWC’s asking price of $160 per share (Charter had offered $132). Both Charter and the threat posed by Apple were thwarted since John Malone made it clear he wasn’t willing to increase his offer for TWC beyond $132 per share.
But what about Apple?
There will eventually come a time when the cable cabal is broken. Intel couldn’t do it last year but Intel frankly isn’t as smart as Apple. Maybe Apple can’t do it, either, but let’s consider what it appears to be giving up by walking away from this deal: Apple would be giving up a chance to revolutionize TV just as it has already done with the music industry.
American TV, depending on how you measure it, is about a $100 billion market. World TV is about $400 billion. That’s a whole lot of fresh new dollars to be gathered by Apple, which really wants and needs continued growth. Why wouldn’t it go for it?
Why wouldn’t it be worth it to Apple to put up the extra $30 per share John Malone would require to take Time-Warner Cable back from Comcast? For that matter, why doesn’t Apple just buy TWC outright? It has the money.
Apple buying TWC would have far fewer anti-trust and restraint-of-trade problems than would the proposed Comcast deal.
If Apple really cares about television and video entertainment, if it sees this as a unique chance to introduce new business models and further expand into a big new market, why wouldn’t it do it?
It might. I hope it does. It’s just a question of balls.
This story might not be over at all.
Microsoft has a new CEO in former cloud and server chief Satya Nadella and readers have been asking me what it means. Certainly Nadella was the least bad of the internal candidates, but an external selection would have been better. Whether it works out well or not probably comes down to Bill Gates, who leaves his job as chairman to become Nadella’s top technical advisor.
You might ask why Nadella, whose technical chops are easily the equal of BillG’s (and a lot more recent, too) would even need Gates in that advisory role? I believe the answer lies in my recent column where I argued that the best new Microsoft CEO would be Gates, himself, because only he could stand up to departing CEO Steve Ballmer.
Ballmer still owns 333 million Microsoft shares, has a huge ego, and that ego is likely to be invested at first in bullying Nadella toward following line-for-line the devices and services strategy Ballmer came up with last year that so far isn’t working too well. If Nadella wants to veer very far from that path by, for example, getting rid of Nokia or making Microsoft an enterprise software company, only Gates will be able to stand between the two men and, frankly, spare Nadella’s job.
This promotion is at best a compromise. My understanding is that the Microsoft board really wanted Alan Mulally from Ford to come in and clean house for a couple years before handing a much leaner company over to a younger successor. It would have been a smart move. But Mulally didn’t want to have to deal with either Gates or Ballmer. Why should he? Mulally’s price for returning to Seattle, I’ve been told, was for Gates to give up the chairmanship and Ballmer to leave the board entirely. Ballmer wouldn’t budge (with $12+ billion in Microsoft shares I might not have budged either) and so Mulally wisely walked.
Let’s assume, then, that Nadella comes into his new job with some immunity to Ballmer so he can make at least a few dramatic changes. What should those be? I’m not going to give the guy advice here but I will say what I expect to happen.
The Xbox isn’t going anywhere. Those who have suggested Microsoft sell its console game platform aren’t thinking that process through very far. What Microsoft needs more than anything else is to be in markets where it can be first or second in market share. Xbox qualifies there and so the company can’t and won’t sell the division even if Nadella transforms the rest of the company into enterprise software.
Speaking of enterprise, that’s where the money is, as IBM has been showing for the last 15 years. Pundits who have been suggesting Microsoft drop consumer Windows to $20 don’t understand that doing so would undermine the larger enterprise market at $100. Rather than chase a waning market it is better to stand firm on consumer pricing and chase the still-growing enterprise. Readers should understand this is me speaking not as a consumer but as a pundit, so this is as much about Microsoft’s corporate health as anything else.
Nadella was Microsoft’s cloud guy and has to know that business is a quagmire of low margins and dubious returns. I’m not saying Microsoft doesn’t belong there because the cloud has become vital in different ways to every part of its business, but I am saying that Microsoft will not survive as mainly a cloud company.
Nokia is a crap shoot tied to the success of Windows Phone, which I don’t think is even possible. Microsoft can’t afford to be number three and losing money, especially while they are making $2 billion per year already from Android royalties. I think Nokia will eventually be resold much as Motorola Mobility was sold by Google.
No devices, then -- at least not inherently mobile ones -- for that devices and services strategy. Ballmer won’t like that.
What Microsoft should do with Windows Phone is kill it and embrace Android. This probably sounds odd to some, but Microsoft is fully entrenched in the enterprise and its future success there will depend on the company’s ability to seamlessly integrate all its data center offerings with mobile clients. It could do that by succeeding with Windows Phone, except that won’t happen, or it can embrace Android and do whatever it takes to make Android work beautifully in a Microsoft environment. This would leverage a Microsoft strength and take advantage of an Apple weakness, since the latter company proudly ignores the enterprise in favor of individual users.
Microsoft’s route to success in mobile, then, is by becoming the next Blackberry.
I think most of these things will eventually happen and at least one or two of them will start under Nadella. Whether he survives the inevitable Ballmer backlash is something I can’t know.
IBM today sold its Intel server business to Lenovo, yet another example of Big Blue eating its seed corn, effectively dooming the company for the sake of short-term earnings. It’s a good move for Lenovo and an act of desperation for IBM.
Wall Street analysts may see this as a good move but then Wall Street analysts typically aren’t that smart. They’ll characterize it as selling-off a low-margin server business (Intel-based servers) to concentrate on a higher-margin server business (Z-series and P-series big iron) but the truth is IBM has sold the future to invest in the past. Little servers are the future of big computing. IBM needs to be a major supplier and a major player in this emerging market.
If you look at the technology used today by Google, Yahoo, and Amazon and many others you’ll see it is possible to operate a large enterprise on huge arrays of inexpensive Intel servers. For a fraction of the cost of an IBM Z-Series (mainframe) or P-Series (mid-range UNIX) system, the equivalent compute power can be assembled from a modest number of low cost servers and the new software tools. IBM turned its back on this truth today by selling the Intel server business.
Maybe this wouldn’t matter if IBM was selling a lot of those higher-margin Z- and P-series machines, but from the look of its latest earnings statement I don’t think that’s the case. So it is selling a lower-margin business where customers are actually buying to invest in a higher-margin business where customers aren’t buying. Yeah, right.
Information Technology is entering a commodity era of computing. Mainframes, mid-range computers and servers are becoming commodities and IBM needs to learn how to operate in a commodity market. IBM needs to become the lowest cost, highest volume producer of commodity servers. Developing new million dollar Pure systems will not bring the business needed to IBM’s Systems and Technology group.
Somebody in Armonk has to know this, right?
IBM needs to embrace the new era of large arrays of inexpensive Intel Servers. IBM needs to adapt its mainframe and mid-range applications to this new platform. The world is moving in this direction. Selling the Intel server business is the exact wrong thing to do for the long term health of IBM.
Inexpensive servers do not necessarily have to be Intel based. IBM could become the leader of large arrays of inexpensive Power- and ARM-based servers. The market is moving to commodity processors. IBM needs to evolve too and be part of that future. But as today’s news shows, it isn't evolving and won’t evolve. I fear the company is doomed.
And on that note you may be wondering about my eBook on IBM that I said would be out by Christmas. Family matters intervened but it now looks like the book will appear on Amazon around the second week of February. You can be sure I’ll put a link right here when it is available.
I say it "looks like" the IBM book will be available then not because there’s any doubt about its timely completion but because of the way the publishing industry works. Some of you may remember that I’ve been slaving away for almost a year on a top-secret book for a major publisher. Well in the depths of that long book contract was a clause giving that major publisher an option on my next two books, which turns out to include this eBook on IBM.
If they like the finished manuscript they may grab it and I have no idea what that means except it sure won’t result in the book appearing sooner.
We’re all equally clueless in this one.
Last week the US Court of Appeals for the District of Columbia shot holes in the US Federal Communications Commission’s version of net neutrality saying the Commission was wrong not in trying to regulate Internet Service Providers but in trying to regulate them as Common Carriers, that is as telephone utilities.
The FCC can’t have it both ways, said the Court, and so the Feds get to try all over again. Or will they? I think events are moving so quickly that by the time this particular argument is worked out all the players will have changed and the whole argument may be moot.
If you read the court’s near-unanimous decision they leave the Commission with two choices: 1) declare ISPs to be Title 2 Common Carriers (phone companies) or; 2) find different language to achieve net neutrality goals within a Title 1 regulatory structure (information services), which might be hard to do.
Under Title 2, voice service is considered a basic consumer right and not to be messed with. That’s how net neutrality proponents would like to see Internet service, too. Instead it is currently classified under Title 1 as an information service such as SMS texting. Your phone company doesn’t have to even offer SMS, nor do they currently have to offer Internet service. See the distinction?
Some pundits are saying the answer is to switch Internet service from Title 1 to Title 2 regulation. This is not going to happen. Yes, the Court says that’s the way to do it, but in the real world of US politics and government it won’t happen.
The time for it to have happened was when the current rules were made circa 2001. Back then an arbitrary decision was made to throw Internet service into Title 1 and I simply don’t recall much debate. The guy who made that decision was Michael Powell, then FCC chairman. Michael Powell (son of Colin Powell and not my favorite Mike Powell -- owner of the world long jump record) is now president of the National Cable & Telecommunications Association (NCTA), which is a trade association representing cable and telephone companies.
Michael Powell -- the guy who put Internet service into Title 1 in the first place -- says that his organization will do whatever it takes to keep the FCC from shifting Internet service to Title 2. That means lobbying and political donations but it could also mean going to court. That’s why net neutrality is dead. It was probably always dead thanks to Powell’s original decision more than a decade ago. We just didn’t know it.
Instead of mourning net neutrality, throwing a tantrum, or trying vainly to figure a way to make the FCC do what it is clearly not going to do, let’s look inside the whole issue of net neutrality to see if we really even need it. I’m not sure we do.
Net neutrality in the era of dial-up Internet was key because we were sipping our data through a very thin straw. V.92 modems could ostensibly download 56 kilobits-per-second while typical US broadband service today is 3-5 megabits-per-second -- 50 to 90 times faster. Our straw today is a lot thicker. This is meaningless to ideologues but it has great practical meaning for actual users. The core technical issue -- then and now -- is packet prioritization. Under net neutrality no data packet is supposed to be better than any other, which means Voice-over-IP and video streaming get no priority, so Internet phone and TV services suffer, or ISPs fear they will.
Why should ISPs even care about such things? They don’t, actually, except as a way to make more money by selling packet priority to the highest bidder. The big ISPs, led by Verizon, want to goose their revenues by selling E-tickets to content providers.
It’s godly free enterprise against ungodly socialism, we’re told, when the reality is more like we might have a slightly less optimal porn experience.
But wait, there’s more! There are parallel battles taking place right now around what are similar, but not precisely identical, issues. On the one hand there is net neutrality and on the other bufferbloat, a technical issue I have written about many times. Net neutrality and bufferbloat are apples and oranges except that they both achieve similar aims. If we cure bufferbloat the effect of having lost net neutrality will go completely unfelt. We’ll gain back so much performance by defeating bufferbloat that whatever we lose to ISP greed won’t even be detectable.
Let me repeat that with slightly different words. Net neutrality is a policy problem. Bufferbloat is a technical problem. The people who are all upset about net neutrality typically have never even heard of bufferbloat. Yet once bufferbloat is solved the operational difficulties presented by the loss of net neutrality (whatever it is that I want to do online is affected by ISP’s selling packet prioritization) will probably become undetectable.
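For readers who haven’t read those earlier columns, here’s a toy model (my own simplification, not anyone’s production code) of why bufferbloat hurts: a link drains packets at a fixed rate, traffic arrives slightly faster, and an oversized buffer happily absorbs the backlog, so every new packet waits longer and longer in line.

```python
# Toy model of bufferbloat: a link drains 100 packets per second while traffic
# arrives at 110 packets per second. With a huge buffer nothing is dropped --
# the queue just grows and so does the queuing delay every packet sees.
LINK_RATE = 100      # packets the link can send per second
ARRIVAL_RATE = 110   # packets arriving per second (a mild overload)
BUFFER_SIZE = 10_000 # an absurdly deep buffer, the heart of bufferbloat

queue = 0
for second in range(1, 11):
    queue = min(BUFFER_SIZE, queue + ARRIVAL_RATE - LINK_RATE)
    delay_ms = queue / LINK_RATE * 1000   # time the newest packet waits in line
    print(f"t={second:2d}s  queued={queue:4d} packets  added latency={delay_ms:5.0f} ms")

# With a small buffer the excess would be dropped early, TCP would slow down,
# and latency would stay low; with a bloated buffer latency climbs second by second.
```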
Bufferbloat is being solved. Network hardware vendors are aligning to fix the problem, each of them seeing in it a chance to sell us all new stuff. This will happen. It’s only a matter of time.
So maybe we shouldn’t care about net neutrality. I don’t care about it. But this appeals to me as a student of business tactics because that’s what we’re likely to see played out over the next year or so. The ISPs want to open-up a new product line to sell -- packet prioritization. But in order to do that they have to first generate need for it.
That need has been provided so far in part by the very people seeking net neutrality. What these folks don’t realize is the ISPs have been using the threat of net neutrality as a marketing message for packet prioritization. To this point it has mainly been used to sell extra bandwidth -- bandwidth that has been sold as a solution of sorts to the very real problems of sending streaming media over the Internet. Ironically more bandwidth doesn’t solve that problem at all, because the problem isn’t a lack of bandwidth, it’s bufferbloat.
So the ISPs, for all their fighting against net neutrality, have actually needed it to sell more bandwidth to content providers. And the bogeyman the ISPs have identified in net neutrality isn’t real at all -- what’s real is bufferbloat.
Ideologues are fighting for net neutrality and ISPs have been fighting against it when they both should have been fighting against the real problem -- bufferbloat.
The reason this hasn’t been made clear is because the ISPs have been making a lot of money off of bufferbloat and are determined to make a lot more before it is cured.
Here is where it gets really interesting for me. If you are a content provider (say Netflix) how much bandwidth do you need and how much bandwidth do you use? If you actually want to send four billion bits-per-second over the Internet, your backbone provider will recommend you buy 10 billion. Running at about 40 percent capacity is considered a good rule-of-thumb.
Considered by whom?
Now that’s a great question -- by the people who are selling you the bandwidth of course.
This is, as my late mother would have said, bullshit.
If you need four gigabits you shouldn’t have to buy 10 gigabits.
Here’s where we need to distinguish between the people who want bandwidth and the people who pay for it. Get ready for a history lesson.
The local networks that grew up alongside the ARPANET and became the on-ramps to the Internet were built around Ethernet, which in those days relied on Carrier Sense Multiple Access with Collision Detection (CSMA/CD). The network was a single shared data bus with all the workstations and servers connected together. To work efficiently these devices had to communicate one at a time, and to manage that they adopted a collision detection scheme. Think of this like an old telephone party line, if you are old enough to remember those (I am).
A party line allowed several houses to share one phone line. You could, if you were nosy, actually listen to your neighbor’s calls if you put your hand over the mouthpiece to keep them from hearing you. When you wanted to make a party line call, then, you’d first listen to hear if someone was already on the line. If they weren’t then you could dial. With CSMA/CD over Ethernet a station did the same thing -- it listened before transmitting. If the wire was busy, or if two stations started talking at once and a collision was detected, each station used a random number generator to decide how long to wait before listening and trying again.
CSMA/CD did not allow 100 percent utilization of available bandwidth on the data bus. All that listening and waiting cut into the number of bits you could send. In truly practical terms the amount of data you could actually send over Ethernet back then was limited to about 40 percent of the rated bandwidth. That’s where we get the rule-of-thumb that we need to provision about 2.5 times as much bandwidth as we actually use.
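If you want to see how contention eats rated capacity, here’s a toy simulation. It’s closer to slotted ALOHA than to real CSMA/CD, so treat it as an illustration of the principle rather than a measurement, but it lands in the same ballpark as that old 40 percent rule:

```python
# Toy shared-bus contention model (slotted-ALOHA style, a simplification of
# CSMA/CD): in each time slot every station transmits with some probability.
# A slot carries useful data only if exactly one station transmits; otherwise
# it is wasted on silence or a collision.
import random

def utilization(stations: int, p: float, slots: int = 100_000) -> float:
    useful = 0
    for _ in range(slots):
        transmitters = sum(1 for _ in range(stations) if random.random() < p)
        if transmitters == 1:          # exactly one talker -> data gets through
            useful += 1
    return useful / slots

if __name__ == "__main__":
    random.seed(1)
    for n in (5, 10, 20):
        # each station offers roughly its fair share of the bus
        print(f"{n:2d} stations -> ~{utilization(n, 1.0 / n):.0%} of rated bandwidth")
    # Prints numbers in the 37-to-41 percent range -- the same ballpark as the
    # old 40 percent rule of thumb, which is the whole point of the exercise.
```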
Except we are no longer living in 1973. Ethernet is no longer coax. Networks are switched, not a common bus. The old rationale for provisioning bandwidth is meaningless today except as a way to justify selling or buying more than you actually need.
Now here’s an interesting point: if a byte falls in the forest and nobody is there to hear it does it make any noise? Joking aside, when a bandwidth provider suggests that you provision 10 gigabits for a four gigabit service, how many gigs are they actually selling you? Why four gigabits of course! You are paying 2.5 times as much as you should.
But this doesn’t mean our current switched networks are perfect by any means. It’s just that the solution of over-provisioning doesn’t actually solve the modern problem, which is bufferbloat.
Wait a minute! Didn’t I just say that bufferbloat was about to be solved? If that happens our networks are going to suddenly start working a lot better, our legitimate bandwidth budgets are going to drop, and as a result backbone providers and ISPs are going to see a drop in both revenue and profit.
That’s why Verizon and the others need to be able to sell packet prioritization as soon as possible. They need a new -- hopefully even bigger -- source of revenue before bufferbloat is solved and the Internet becomes a buyers’ market. Those who are fighting against the ISPs on this issue are actually working to help those ISPs make their selling case to content providers. The law seems to be favoring the ISPs, too.
I’m pretty sure the ISPs will prevail. Net neutrality will fail. Nobody’s service will be hurt as a result because bufferbloat will be going away. But content providers will by then be paying for packet priority they probably never needed.
Welcome to capitalism.
We’re generally a Macintosh shop here in Santa Rosa. I have Windows and Linux PCs, too, but most of the heavy lifting is done on Macs. Next Wednesday I’m expecting a delivery from B&H Photo (no tax and free shipping!) of four new iMacs plus some software totaling $5,407. I fully expect these to be the last personal computers I will ever buy.
How’s that for a 2014 prediction?
Moore’s Law doubles the performance of computers every couple of years and my old rule of thumb was that most people who make their living with computers are unwilling to be more than two generations behind, so that means no more than four years between new PCs. And that’s the logic upon which the market seemed to function for many years. But no longer.
Wall Street analysts have noted the slowdown in PC sales. Servers are still doing well but desktops and notebooks are seeing year-over-year declines. Even Apple is selling fewer Macs and MacBooks than before. This trend is unlikely to be arrested… ever.
The computers I’m replacing are those of my wife and kids -- three Mac Minis from 2007 and a 2008 iMac. The Minis were all bought on Craigslist for an average cost of $300 while the iMac was an Apple closeout almost six years ago.
I don’t get a new PC this time but I did spend $400 to rebuild my 2010 MacBook Pro with 16GB of RAM, a big hybrid drive and a new, higher-capacity, battery, which should be plenty for the next 2-3 years.
The Minis are for the most part okay, but my kids tell me that the next version of Minecraft will no longer support their GPUs. None of them can be upgraded to the latest version of OS X, either. They’ve reached the end of the line as desktops so I’ll probably use them as servers running Linux. None of them will be sold or thrown away.
The 2008 iMac has been slowly losing its mind and now goes into occasional fits of spontaneous rebooting that I think is related mainly to overheating. I’ll leave it turned off until it’s time to copy over to the new machine. Until then Mary Alyce can use her 2008 MacBook.
What we have here is a confluence of three trends. The first trend is marginal performance improvements over time. Yes, a new PC will make your spreadsheet recalculate faster but if you can’t feel the improvement -- if the change isn’t measurable in your experience -- is it an improvement at all? Why change?
Gamers will always want faster computers, but a second trend will probably satisfy them. That trend is exemplified by the Mainframe2 column I wrote a few weeks ago. Apps requiring a lot of processing power are starting to migrate to the cloud where massive crowds of GPUs can be applied only as needed making much more efficient use of resources and allowing a PC gaming experience on devices like tablets. Adobe especially seems to be embracing this trend as a new way to generate consumer app revenue. They will be followed eventually by all software companies, even game companies.
Deliberate obsolescence can push us toward replacing a PC, but replacing it with what? The third trend means the next PC I buy for my kids won’t be a PC at all, but a phone. I wrote about this before in my column The Secret of iOS 7.
This is the key transition and one that bears further explanation. Apple is happy with it while Microsoft and Intel are not. Apple is happy because the average length of time we go before replacing a phone is actually getting shorter just as our PC replacements are getting farther apart. Long driven by two-year cellular contract cycles, the average life expectancy of a mobile phone is 18 months and in a couple years it will probably drop to 12 months.
Look at this from Apple’s perspective. If they sell us computers every five years at an average cost of $1,200 and a gross profit margin of 40 percent that’s $480 in profit over five years or $96 per customer per year. But if Apple sells us an iPhone every 18 months at a real cost of $500 (usually hidden in the phone contract) with the same 40 percent margin that’s $200 every 18 months or $133 per year. Apple makes far more money selling us iPhones than iMacs. If the phone replacement cycle shortens then Apple makes even more money from us.
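If you want to check my arithmetic, here it is in a few lines of Python, using the same round numbers and the same assumed 40 percent margin as above (Apple’s real accounting is of course more complicated):

```python
# Rough per-customer profit comparison: Macs on a five-year cycle versus
# iPhones on an eighteen-month cycle, both at an assumed 40 percent gross margin.
MARGIN = 0.40

mac_profit_per_year = 1200 * MARGIN / 5.0        # $1,200 Mac every 5 years
iphone_profit_per_year = 500 * MARGIN / 1.5      # $500 iPhone every 18 months
iphone_at_12_months = 500 * MARGIN / 1.0         # if the cycle shrinks to a year

print(f"Mac customer:             ${mac_profit_per_year:.0f} per year")    # ~$96
print(f"iPhone customer (18 mo):  ${iphone_profit_per_year:.0f} per year") # ~$133
print(f"iPhone customer (12 mo):  ${iphone_at_12_months:.0f} per year")    # ~$200
```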
No wonder Apple has little nostalgia for the Mac.
And given that the company dropped the word computer from its name back in 2007, they’ve seen this coming for a long, long time.
This year we’ll see an important structural change take place in the PC hardware market. I’m not saying there won’t still be desktop and notebook PCs to buy, but far fewer of us will be buying them. This tipping point has already come and gone and all I am doing here is pointing out its passage.
If you read about some Wall Street analyst attributing declining PC sales to the bad economy at, say, Dell or HP, well they are just being dumb. It’s not the economy, stupid. It’s a whole new market.
It should be no surprise, then, that Apple -- a company known for its market timing -- has just started shipping a new Mac Pro. That amazing computer is overkill for 95 percent of the desktop market. It represents the new desktop PC archetype, which is a very expensive hugely powerful machine tightly aimed at the small population of professional users who still need a desktop. Unless you are editing HD video all day every day, you don’t need a new desktop PC.
What the rest of us will get are new phones and whole new classes of peripherals. The iPhone in your pocket will become your desktop whenever you are within range of your desktop display, keyboard and mouse. These standalone devices will be Apple’s big sellers in 2014 and big sellers for HP and Dell in 2015 and beyond. The next iPod/iPhone/iPad will be a family of beautiful AirPlay displays that will serve us just fine for at least five years linked to an ever-changing population of iPhones.
Apple will skim the cream from the AirPlay display market, too, until it is quickly commoditized at which point the big PC companies can take over while Cupertino concentrates on the higher-margin iPhones.
Now imagine you no longer need or even want a desktop or notebook PC. What that means is your life becomes even more phone-centric with the result that your inclination to upgrade is further accelerated. Move the upgrade slider to 12 months from 18 and what does it do to Apple’s bottom line? Exactly.
And what does it do to Microsoft’s? Intel’s? Dell’s? HP’s?
Exactly.
Tim Cook has come under a lot of criticism for not moving fast enough at Apple but frankly this structural market change simply couldn’t have happened before now. He had to wait.
2014 will be Cook and Apple’s year and I don’t see even a chance that he will blow it.
Following my #1 prediction yesterday of dire consequences in 2014 for Microsoft some readers challenged me to say what should happen this year in Redmond to right the ship. Is it even possible?
So here’s my answer which isn’t in the form of a prediction because I doubt that it will actually happen. But if it actually does come to pass, well then I told you so.
At this point in Microsoft’s history the only CEO who could follow Steve Ballmer and be more or less guaranteed to be successful is Bill Gates. I think Bill should take back his old job for a while.
Microsoft has become IBM, though not the rotten and corrupt IBM of today. Microsoft has become the IBM of the 1980s and early 90s when Steve Ballmer managed that most important customer relationship for Redmond. Ballmer learned as much from IBM as he did from Microsoft, Procter & Gamble, or Stanford Biz School. It’s just that a lot of what he learned hasn’t been that useful.
So Microsoft is today top-heavy with bad management -- managers managing managers who are managing managers -- and has for the most part lost its way. Ballmer, whom I have always liked by the way, is trying to lead like Jack Welch because he can’t lead like Bill Gates, simple as that.
Yet Ballmer is neither Welch nor Gates and that’s the problem.
He’s done a fair job of minding the business but not a very good job of minding either the culture or the technology.
If Ballmer stays on the Microsoft board, anyone who follows him as CEO will be subject to undue criticism and a very short event horizon -- anyone, that is, except Bill Gates. Ballmer has no power over Gates and Gates for the most part doesn’t even care what Ballmer thinks.
But why would Bill Gates even want the top job at Microsoft? He’s moved on, after all, to curing malaria and saving the world, right?
Bill Gates would take the job on the right terms if it allowed him to realize a goal and imitate Steve Jobs.
Bill always admired Steve and marveled at his ability to inspire workers. Bill also pretty much gave up the idea that he could ever compete with Steve. But I think the current situation might change that. Bill might just now be able to out-Steve Steve.
The problem is Windows. The unified code base between desktops, tablets and phones was a mistake. Apple has OS X and iOS -- two different code bases -- for a very good reason. A phone is not a PC and a PC is not a phone. But to this point Microsoft has been too proud, too stupid, and too caught up in its own internal nonsense to admit this.
The only person who could cut through the crap at Microsoft and fix this mess is Bill.
Take the job for one year and $1 with the goal of delivering two new operating systems in 365 days -- Windows 9 and Microsoft Phone.
Bill has mellowed and matured in his time away. He’s simply a better person than he was and I think a better leader, too. Nor are the troops as over-awed as they used to be, which is good.
Bill could do this, and by doing so earn even more money for his foundation.
But will he?
Here is my first of two prediction columns for 2014. There’s just too much for it all to fit in one column. My neighbor and good friend Avram Miller wrote a predictions column this year that’s quite good and you might want to read it before this one. We discuss some of the same things though of course Avram and I occasionally agree to disagree.
This column is mainly about business predictions for 2014 while the follow-up column will be more about products and technologies.
#1 -- Microsoft gets worse before it gets better. Ford CEO Alan Mulally, who already owns a home in Seattle, announced just today that he is staying with Ford through 2014 and absolutely positively won’t be the next CEO of Microsoft. This firm statement is in contrast with his kinda-sorta firm denials before today. What this means to me is that Mulally was in hard discussions about the Microsoft job but walked away from the deal. Since there’s a clock ticking on Ballmer’s retirement someone will get the position but that someone will now probably be an insider, possibly Stephen Elop.
This is terrible news for everyone, even for the people who made it inevitable -- Steve Ballmer and Bill Gates. Mulally would have taken the job had Ballmer and Gates resigned from the Microsoft board. They wouldn’t and so he didn’t, the result being more palace intrigue and behind-the-scenes micromanaging not to good effect. Whoever gets the top job won’t have the power to do what’s needed and probably won’t have the job for long.
Microsoft’s future lies in the enterprise and the quicker it gets out of consumer products the better for the company, but a weak CEO won’t be able to move fast enough.
We’ll revisit this one next year when Mulally may again be on the short list.
#2 -- IBM throws in the towel. Any minute some bean counter at IBM is going to figure out that it is statistically impossible for the company to reach its stated earnings-per-share goal of $20 for 2015. Cutting costs, buying revenue, repurchasing shares and short-changing both customers and employees no longer adds-up to enough financial power to get the job done. This will lead to a management crisis at Big Blue. On top of that throw half a dozen customer lawsuits over bungled projects and it doesn’t look good for the regime of Ginni Rometty.
Can she pivot? That’s the question. Rometty has been trying to follow her predecessor Sam Palmisano’s playbook but it isn’t working. She needs a new strategy. This is actually a great opportunity for both Rometty and IBM, but the second half of this prediction is they’ll blow it. Rometty and IBM will survive 2014 but it won’t be pretty.
#3 -- BlackBerry to Microsoft. Assuming Elop gets the top job at Microsoft (not at all a shoo-in) he’ll approach the enterprise play from a mobile angle and that means buying BlackBerry. Microsoft will get enough patents from the deal to further enhance its revenue position in the Android market (you know Microsoft gets royalties from Android phones, right?). Redmond will get a great R&D facility in Waterloo and thousands of super-smart employees. I think this will happen, Elop or not. The only alternative purchasers are Intel and Qualcomm and I don’t see either of them doing it.
#4 -- Intel does ARM, kinda. The idea that Intel would go back to building ARM processors is supposed to be a big deal but I don’t see it happening without some external push. Remember Intel has been down this route before, eventually selling its StrongARM operation to Marvell. What’s key here is that Apple needs to dump Samsung so it’ll force Intel to fab its A-series processors by threatening to stop buying desktop and notebook CPUs. It’s not as big a deal as it sounds except that Samsung will be losing its largest customer.
#5 -- Samsung peaks. With Apple gone and Samsung phone margins eroding, what’s the company to do? 4K TVs aren’t it. Samsung needs to actually invent something and I don’t see that happening, at least not in 2014.
#6 -- Facebook transforms itself (or tries to) with a huge acquisition. I wrote long ago that we’d never see Facebook in the Dow 30 Industrials. The company is awash in users and profits but it's lost the pulse of the market if it ever had it. Trying to buy its way into the Millennial melting data market Facebook offered $3 billion for Snapchat, which turned it down then rejected a $4 billion offer from Google. Google actually calculates these things, Facebook does not, so where Google will now reverse-engineer Snapchat, Facebook will panic and go back with the BIG checkbook -- $10+ billion. If not Snapchat then some other overnight success. Facebook needs to borrow a cup of sugar somewhere.
#7 -- Cable TV is just fine, thank you. Avram thinks cable TV will go all-IP. This is inevitable and in fact I wrote about it the first time at least eight years ago. Cable companies already make all their profit from Internet service so why do anything else? But not this year. That’s one for 2016. For the moment cable advertising is in resurgence and these guys aren’t going to make any significant changes while they are still making money. Look for more cable industry consolidation but nothing revolutionary… yet.
#8 -- The Netflix effect continues, this time with pinkies raised. Hollywood is for sale. Nothing new there: Hollywood has always been for sale. Remember when it was Sony buying Columbia Pictures that was supposed to change entertainment forever? How did that work out? Now it’s Netflix blazing a new trail for content creation that threatens the old models except it doesn’t. Netflix knows from its viewer logs who likes what and can therefore make original programming that’s reliably popular. Amazon, in contrast, asks its users what they like and hasn’t been nearly as successful as Netflix at original content. Amazon will learn in time (it always does) but one of the things it’ll be learning is that people lie about what they like, though not about what they watch. Hollywood already knew that. Amazon doesn’t need to buy Netflix to learn this lesson, so I seriously doubt that Netflix is going to be in play. But Hollywood itself will very much be in play… in 2016.
#9 -- What cloud? The cloud disappears. My old friend Al Mandel once told me "The step after ubiquity is invisibility". This means that once everyone has something it becomes a given and gains commodity status along with dramatically lowered profit margins. In the 240 suggested predictions I saw over the last few days from readers almost nobody used the word cloud. It has effectively become invisible. Every IT startup from here on will rely strongly on the cloud but it won’t be a big deal. Commoditization will have cloud providers competing mainly on price. This means there are unlikely to be any significant new entrants to this space. The cloud opportunity, such as it was, has come and gone.
#10 -- Smart cards finally find their place in America. I covered smart cards in Electric Money, my PBS series from 2001, yet they still aren’t popular in the USA. Smart cards, if you don’t know it, are credit or debit cards with embedded security chips (the EMV standard long used in Europe) that impart greater security, though at a cost. They’ve been popular in Europe for 15 years but American banks are too cheap to use them... or were. The Target data breach and others will finally change that in 2014 as the enterprise cost of insecurity becomes just too high even for banks Too Big to Fail.
Look for 10 or so more product predictions tomorrow.
Edward Snowden says (according to Reuters) that RSA Security accepted $10 million from the National Security Agency in exchange for installing (or allowing to have installed) a secret backdoor so the NSA could decrypt messages as it pleased. Hell no, says RSA (a division of storage vendor EMC), stating in very strong terms that this was not at all the case. But then, on a second-day look at the RSA/EMC statement, bloggers began to see the company as dissembling, its firm defense as really more of a non-denial denial. So what’s the truth here and what’s the lesson?
For the truth I reached deep into the bowels of elliptic curve cryptography to an old friend who was one of the technology’s inventors.
"RSA is lying," said my friend. "No room for ambiguity on this one. The back-doored RNG was a blatantly obvious scam and they made it the default anyway".
My friend has no reason to lie and every reason to know what’s what in this tiny corner of technology, so I believe him. Besides, the Snowden revelations have all proven true so far.
What’s with EMC, then?
Forget for a moment about right and wrong, good or evil and think of this in terms of a company and one of its largest customers -- the US Government. It’s more than just that $10 million NSA payday EMC has to see as being at risk. With the Obama Administration’s back against the wall on this one, EMC has to see its entire federal account as endangered.
That’s the only reason I can imagine why an NSA contractor would say that they didn’t know the backdoor existed (we are incompetent, hire us) or that once they did know it existed they waited years to do anything about it.
These are not the kind of admissions corporate PR wants to make unless: a) they are being forced to do it, or b) the real truth is even worse.
I’m guessing that EMC sees itself as taking one for the team. The problem, of course, is what team are they on? It certainly doesn’t seem to be that of the American people.
Full disclosure is the best course here and if full disclosure is prohibited by security regulations and spook laws then the thing to do is to get out of the business. I’m serious. EMC could and probably should simply resign the NSA account, which would say more about this case than any detailed explanation.
There was a time when "activist investor" Carl Icahn actually owned and ran businesses, one of which was Trans World Airlines (TWA), eventually sold to American Airlines. In an attempt to cut costs, TWA under Icahn outsourced reservation service to a call center built in a prison with prisoners on the phone. When you called to book travel you were giving your credit card number to a felon and telling him when you’d be away from home. Smart move, Carl, and very akin to what may have caused the post-Thanksgiving theft of 40 million credit card numbers from Target, the U.S. discount retailer.
Target used to do its IT all in the USA, then to save costs they moved IT to a subsidiary in India. Care to guess where the Target data breach came from? I’m guessing India. I’m also guessing that there will never be any arrests in the case.
It could have started anywhere, I suppose. Certainly there are plenty of thieves in the USA. But the possible link to offshoring can’t be ignored. Most big U.S. corporations have some IT work being done offshore. This greatly limits oversight and introduces huge new risks to their businesses -- risks that are consistently underestimated or even ignored. The data that runs these businesses and most financial transactions are in the hands of workers over whom the American customer has little management control and almost no legal protection. Even the ability to verify skills or do real background checks is difficult.
But offshoring is far from Target’s only mistake. Target CIO Beth Jacob, whose background is in operations, not IT, told ZDnet last month that Target was especially proud of its quick customer Point of Sale experience. That suggests a lot of IT attention to POS, which of course is exactly where the credit cards were grabbed.
Mary Alyce, my young and lovely wife, was in our local Target store yesterday and saw them replacing every POS terminal in the place. No pun intended.
Let’s guess what actually happened at Target sometime around November 15th. There are a couple concepts in the management of IT systems that are relevant to this issue. The first is configuration management -- managing how you have the components of your IT shop installed, configured, etc. The second is change management -- how you manage changes to those configurations. While both concepts are important and critical to an operation like Target, it is an area where tools are sorely lacking. For either to work well there needs to be an independent process of verification and checking: if you changed something, did the change work? Was the device or system changed outside of the change process? While it is great to tout good processes, ITIL and the rest, you can’t assume people will do their jobs perfectly or follow the processes to the letter. There will be mistakes and, sadly, there will be mischief. How do you know when this happens? At Target I’m guessing they didn’t know until it was already too late.
Someone probably made an out of process change to Target’s POS system and nobody noticed.
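What would independent verification even look like? Here’s a bare-bones sketch of the idea (my own illustration, with made-up file paths, and nothing to do with whatever tools Target actually runs): hash the configuration files you care about, keep a known-good baseline, and raise an alarm when something changes outside the change process.

```python
# Bare-bones configuration-drift check: compare current file hashes against a
# saved baseline so an out-of-process change gets noticed instead of ignored.
# Paths and file names here are illustrative assumptions, not anyone's real setup.
import hashlib
import json
from pathlib import Path

BASELINE = Path("pos_baseline.json")
WATCHED = [Path("/etc/pos/terminal.conf"), Path("/etc/pos/network.conf")]

def fingerprint(paths):
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in paths if p.exists()}

def save_baseline():
    BASELINE.write_text(json.dumps(fingerprint(WATCHED), indent=2))

def check():
    baseline = json.loads(BASELINE.read_text())
    current = fingerprint(WATCHED)
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} changed outside the change process")
    for path in baseline:
        if path not in current:
            print(f"ALERT: {path} is missing")

if __name__ == "__main__":
    if BASELINE.exists():
        check()
    else:
        save_baseline()
        print("Baseline recorded; run again to check for drift.")
```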
This breach is also an excellent example of why not everything in your IT shop should have access to the Internet. Clearly Target’s POS terminals had access to the Internet. If they had been on a secured private internal network, this crisis might not have been possible. Just because a machine has an Ethernet connection doesn’t mean it should have connectivity to the Internet.
One final question: Where is the NSA in all this? Are they using all their technology to investigate and deal with this crisis? I don’t think so. This attack on Target is nothing less than a major cyber attack on the USA banking system. Show me the metadata!
It’s hard to believe sometimes, but I began writing my columns -- in print back then -- during the Reagan Administration. It was 1987 and the crisis du jour was called Iran-Contra, remember it? Colonel Oliver North got a radio career out of breaking federal law. The FBI director back then was William Sessions, generally called Judge Sessions because he had been a federal judge. I interviewed Sessions in 1990 about the possibility that American citizens might have their privacy rights violated by an upcoming electronic surveillance law. "What would keep an FBI agent from tapping his girlfriend’s telephone?" I asked, since it would shortly be possible to do so from the agent’s desk.
"It would never happen", Sessions said.
"How can you say that?"
"Why that would be illegal", the FBI director explained.
That old interview came to mind as I was thinking about the proposed NSA surveillance sanctions we’ll be hearing about this week. According to the usual leaks to major newspapers, the changes won’t be very much and bulk surveillance of American citizens will continue with maybe a privacy advocate allowed to argue before the FISA Court, though of course none of us will ever know about those secret arguments or how vigorously they are pursued.
Among the substantiated allegations against NSA operatives, by the way, is that several did snoop on their girlfriends, even though doing so was illegal.
What’s to be done then? I think real reform is unlikely so I won’t even suggest it. I’m trying to be practical here.
So just for the sake of discussion here’s my idea how to improve the status quo. Install the FISA privacy advocate, substantially beef-up the inspector-general operation at the NSA and require an annual report on its activities to the people of the USA. Beyond that I’d raise criminal penalties for violating specific privacy guidelines like snooping on past lovers using NSA data or facilities.
I’d apply the death penalty to these.
Violate the privacy of a US citizen as an NSA employee without proper reason and authorization and if you are caught then you die.
Because Judge Sessions was right, of course. Such violations would never happen.
Why that would be illegal.
An old friend has been telling me for months that the future of personal computing was coming with new Windows tablets using the Bay Trail system-on-chip architecture built with Intel Silvermont cores. Silvermont is the first major Atom revision in years and is designed to be much faster. Bay Trail would lead to $199 8-inch Windows tablets while also fixing the limitations of Intel’s previous Clover Trail. Well Bay Trail units are finally shipping but my techie friend is sorely disappointed with his.
The lure of this platform for Intel is great. Manufacturers could use the same chassis and chipsets for everything except gaming boxes and servers. Eight inch tablets, ChromeBooks, Ultrabooks, 10-inch tablets, and netbooks, all one chassis with up to 4GB of RAM and a 256GB SSD. One size fits all for home, car, travel, and work.
That’s the dream, but here’s the reality, at least so far. While the first Bay Trail tablets have the important features of SD card, HDMI, USB, and GPS, most of these are hobbled in one way or another.
In the units shipping so far from Dell, Lenovo, and Toshiba (my buddy has a Toshiba Encore), if you are charging you can’t use the single USB port for anything else. This means it can only be used as a workstation on battery power. It can only play a DVD movie from battery. This is dumb.
And not just any USB cable will do for charging. These tablets will only charge if they sense that the USB data lines are shorted, which means accessories can’t be connected at the same time. Even worse, while it’s charging you can’t use the USB port for anything else, so no keeping the tablet plugged-in at your desk as a workstation or kiosk. My friend’s charger that does this has a Windows logo on it, so the hobbling has to be deliberate on Microsoft’s part. Way to go!
Accessory companies are reportedly working on a modified hub or Y-cable with a switch and a 1.8k resistor but that is not only a kludge, it could cost them their Windows certification.
And while these devices have "GPS", it’s not like you think. Most of these tablets are using the Broadcom 4752 GNSS chip and the driver provides only the location, date, and precision information, not the raw satellite information. Worse, it provides information to the Windows Locator Service only and not in the industry standard NMEA serial data stream through a virtual com port required by Windows programs like Microsoft Streets and Trips or Delorme Street Atlas.
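For anyone who hasn’t seen it, the NMEA stream those mapping programs expect is just lines of comma-separated text arriving over a serial (or virtual COM) port. Here’s a small sketch of what reading one of those sentences involves, using a textbook example fix rather than anything from my friend’s tablet:

```python
# Parse a single NMEA GGA sentence of the kind GPS mapping programs expect to
# read from a (virtual) COM port. The sentence below is a standard example fix,
# not real data from any particular device.
def parse_gga(sentence: str):
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")

    def to_degrees(value: str, hemisphere: str, deg_digits: int) -> float:
        # NMEA packs latitude as ddmm.mmmm and longitude as dddmm.mmmm
        degrees = float(value[:deg_digits]) + float(value[deg_digits:]) / 60.0
        return -degrees if hemisphere in ("S", "W") else degrees

    return {
        "time_utc": fields[1],
        "latitude": to_degrees(fields[2], fields[3], 2),
        "longitude": to_degrees(fields[4], fields[5], 3),
        "satellites": int(fields[7]),
    }

example = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(example))
# -> roughly 48.1173 N, 11.5167 E, using 8 satellites
```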
A company called Centrafuse makes a program called Localizer to fill this gap but that also adds $15 to the cost. The bigger problem is very few consumers will even know there is a gap… until they are in their cars and lost.
This platform appears to have real potential yet Microsoft -- not Intel -- has deliberately hobbled it. Why? I’m guessing there’s another version coming for a little more money that will unlock these features, making them more usable.
But not in time for this Christmas.
On my home page you’ll always see a link to Portrait Quilts, my sister’s website where for several years she has sold quilts, pillows, and tote bags printed with customer photographs. This is how she makes her living, selling on the web and through photo stores. Buy one, please. Or if you are a quilter she’ll print your photos on cloth so you can quilt them yourself.
Then approximately three months ago Google decided that Portrait Quilts does not exist.
You can find a Google listing for portraitquilts.com, if you search for that specific string, but if you look for photo quilts or any similar search term, Portrait Quilts -- which for years was always the top result -- no longer appears. My sister’s web traffic and her income were instantly and dramatically reduced for her biggest season of the year, Christmas.
I’m her big brother and feel protective but I didn’t know about it until this week when I came for a visit. You see in addition to making quilts day and night my sister has been caring for our 89 year-old mother who was just diagnosed with a particularly nasty kind of cancer. Mom begins chemotherapy on Thursday. Pray for her.
To whom do you complain at Google, a company that prefers machine-to-machine communication? Nobody, it seems, because my sister has tried. You’d think if they were going to claim 70 percent of America’s search traffic they’d at least do it fairly, but no. Nobody at Google will respond.
There are plenty of search engine optimization companies that will take my sister’s money and pretend to do something about this problem, but when you press them to explain their techniques it always comes down to bullshit.
No search engine optimization company can make Google index something Google doesn’t want to index.
She could always buy traffic with Google AdWords, but wouldn’t that be extortion? I mean there simply aren’t that many photo quilt companies on the Internet and for a big one to disappear entirely from Google and have to buy its way back on, well that sure sounds like a racket to me.
But wait, there’s more!
My little sister is not without friends. She is, in fact, quite well known in the computer, software, and Internet industries and not for her quilts or for being my little sister. She founded PC Data, the largest PC market research firm, now part of the NPD Group. My little sister knows people who know people.
One of those people is Gordon Eubanks, ex-Digital Research and later CEO of Symantec. Gordon offered to take her complaint to his buddy Eric Schmidt, chairman of Google.
Eric Schmidt said he was sorry but he couldn’t help.
Now I’m angry.
If you are an Internet entrepreneur who has been similarly dissed by Google, let me know. If you found a way to resolve a similar problem, let me know. If you are one of the hundreds of Googlers who read this column, tell me how to fix this.
Because up until a couple months ago this looked like it was going to be a blockbuster Christmas for Portrait Quilts. Not anymore.
Last week I began this series on large companies in turmoil by looking at Intel, which I saw trying to guarantee its future through enlightened acquisitions that actually emulated this week’s company -- Cisco Systems.
So if Cisco already knows how to assimilate other companies and technologies to stay ahead of the market, how can it have a problem? Cisco’s problem is its market is mature and being commoditized with all boats sinking. And this time there isn’t an obvious new idea to buy.
Cisco is becoming a very expensive utility appliance, and several of its revenue streams are at risk.
Cisco is old and tired according to a friend who worked there. "When I was at Cisco during 2000," he said, "anyone with more than four years of service was an odd duck, you cashed out with your options after four years. Today I still see the same people from 2000 and wonder, 'What are you doing?' Everything is static and big corporate, the juice went out the door in 2000".
The core problem for Cisco is that customers are beginning to ask why they should pay $550 per month for a T1 and $3,000 for a Cisco router when they can get DOCSIS 3.1 for $99 with a $300 router and use VPN tunnels to build their own private WAN. Run the numbers below and the answer is obvious.
For that matter, why buy complex, very large, and expensive switches from Cisco when Wi-Fi at 1.3 Gbps is available?
Why pay $1,000 for a Cisco VoIP phone at your desk? Who is calling you at your desk anymore?
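To put rough numbers on that first question, here is a back-of-the-envelope comparison using only the prices quoted above, over a hypothetical 36-month horizon. It ignores install fees and the reliability and SLA arguments that still favor a T1 for some businesses -- the point is just the size of the gap.

```python
# Rough 3-year cost comparison using the column's quoted prices (illustrative only).
months = 36

t1_monthly, t1_router = 550, 3000        # T1 line plus Cisco router
cable_monthly, cable_router = 99, 300    # DOCSIS 3.1 plus VPN-capable router

t1_total = t1_monthly * months + t1_router
cable_total = cable_monthly * months + cable_router

print(f"T1 + Cisco router over {months} months:     ${t1_total:,}")     # $22,800
print(f"DOCSIS + cheap router over {months} months: ${cable_total:,}")  # $3,864
print(f"Ratio: {t1_total / cable_total:.1f}x")                          # ~5.9x
```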
Flip cameras were a bust: the profit margins were too low. Ditto for Linksys. Cisco isn’t selling many of those $300K telepresence rooms, either -- another market being gobbled from beneath.
The Cisco IP-everything vision has run out of steam, as has much of Cisco’s technology. Look under the hood and the Cisco next-generation routers still use a single-threaded OS to support all functions, even with dual-core processors on board. The same is true of the edge and enterprise switches.
Very old operating system technology is still running the Cisco IP world. Cisco has only recently moved to a Linux-based OS (on the Nexus line), and that is just starting to become stable after five years.
Worst of all, software defined networks (SDN) are poised to eliminate entire Cisco product lines, replacing them with commodity equipment running open source software.
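To make the SDN threat concrete, here is a minimal sketch of the match-action model in plain Python -- not any particular controller’s API. The point is that the intelligence lives in a controller that pushes simple rules, while the commodity switch just matches packets and forwards them, which is exactly the job Cisco’s proprietary boxes used to monopolize.

```python
# A toy match-action flow table, sketching the SDN idea: the "intelligence"
# lives in a controller that pushes rules; the switch just matches and forwards.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match dict, action)

    def add_flow(self, priority, match, action):
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority wins

    def forward(self, packet):
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # no matching rule: drop the packet

# The "controller" pushes two rules to a commodity switch.
switch = FlowTable()
switch.add_flow(100, {"dst_ip": "10.0.0.5"}, "output:port2")
switch.add_flow(10, {"proto": "tcp"}, "output:port1")

print(switch.forward({"dst_ip": "10.0.0.5", "proto": "tcp"}))  # output:port2
print(switch.forward({"dst_ip": "10.0.0.9", "proto": "udp"}))  # drop
```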
This does not mean Cisco is doomed. With $48 billion in cash they can buy a new future or two. It’s just that their old markets are fading fast and new markets of big enough size just aren’t emerging.
It doesn’t look good.
This is the first in a series of columns on the strategic direction of several major technology companies that have faltered of late. We’ll start here with Intel, follow in a couple days with Cisco, followed by Microsoft, then see where it goes from there.
At Intel’s annual shareholders’ meeting last week the company talked about moving strongly into mobile chips and selling its stillborn OnCue over-the-top video streaming service, but the most important story had to do with expanding Intel’s manufacturing capacity. This latter news is especially important because if you look at the square footage of 14 nanometer fab facilities Intel says it will be bringing online in the next two to three years it appears that the company will shortly have more production capacity than all the rest of the semiconductor industry combined.
Not just more 14 nm production capacity, we’re talking about more total production capacity than all Intel competitors together.
This is fascinating news for several reasons. First, at $5+ billion per fab you don’t add 3-4 new ones without a darned good reason for doing so. Second, it takes so long to plan and build these plants that this part of Intel’s strategic plan had to have been in the works for years before the company ever mentioned it in public. So it’s not like new Intel CEO Brian Krzanich moved into his office and said, "Let’s build some new fabs".
That’s not to say Krzanich has been without impact on the company. He clearly (and properly) killed OnCue, which was Intel’s third try at building a media empire and wasn’t any more thoughtfully done than the first two failures. Watch for Intel to sell OnCue to Verizon for $500 million as planned, but with sweetheart financing from Intel Capital so it doesn’t cost Verizon a penny upfront.
This is not a criticism, Brian: I’d do the same thing.
Increasing the fab capacity makes sense, too, though maybe not as much sense as it made a few years ago when the idea was first presented. Then PC sales were still growing and excess production capacity promised the sort of low prices that could finally kill the hated AMD.
Only today AMD isn’t that much of a threat. There are other players like Qualcomm in the mobile space and even Apple that are far scarier than AMD.
So what’s a Krzanich to do? He’s making the best of a difficult situation.
Understand that Intel is far and away the best semiconductor manufacturer the world has ever known. Its only real competitor in manufacturing is TSMC in Taiwan and even that’s not a close race. Intel has the best technology, the best yields, and because of the way things work in the semiconductor industry, it has the lowest manufacturing cost per chip.
What it doesn’t have, however, is the best mix of chips to build. Desktops are in decline, the market is all GPUs and mobile with a huge flash RAM opportunity on the horizon -- all areas where Intel is at a disadvantage.
So Krzanich has Intel looking into building chips for others, entering the foundry business. On the face of it this move looks really, really stupid. But the more I think about it the more sense it actually makes.
What’s stupid about competing with TSMC and others in the foundry business is the profit margins aren’t good at all. Typically a larger fabless semiconductor company will pay around $6,000 for each foundry-processed wafer, yet the same wafer filled with Ivy Bridge processors can easily generate $400,000+ in sales for Intel. That’s the whole idea behind vertical integration -- to take all the profit from every stage of the business.
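For a sense of how those wafer economics work, here is a rough calculator. The die size, yield, and selling price below are hypothetical placeholders, not Intel’s actual numbers; the point is simply that a wafer’s worth of finished processors sells for vastly more than the roughly $6,000 a foundry charges to process it.

```python
import math

# Back-of-the-envelope wafer economics. All parameters are hypothetical.
wafer_diameter_mm = 300
die_area_mm2 = 160          # roughly the size class of a mainstream CPU die
yield_rate = 0.80           # fraction of dies that work
avg_selling_price = 300     # dollars per finished chip (placeholder)
foundry_wafer_price = 6000  # what a fabless customer pays per processed wafer

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
gross_dies = int(wafer_area / die_area_mm2)   # ignores edge losses
good_dies = int(gross_dies * yield_rate)
revenue_per_wafer = good_dies * avg_selling_price

print(f"~{good_dies} good dies/wafer -> ~${revenue_per_wafer:,} in chip sales")
print(f"vs. ~${foundry_wafer_price:,} foundry revenue for the same wafer")
```

Swap in a $1,000-plus server-class part instead of the $300 placeholder and the per-wafer number climbs toward the $400,000 figure above.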
Admittedly there’s plenty of R&D and other expenses that need to be covered by that $400,000 wafer, but the idea of making it all yourself to reap all the benefit makes sense… except if you don’t know what to make.
That’s Intel’s problem. It can build the darned things better than anyone else but it doesn’t necessarily have the right product mix to build for the current market. And even if it gets its product mojo back tonight that won’t have much effect on its business for another two to three years.
Brian Krzanich can’t wait two to three years for new mojo. He needs mojo right now.
And that’s why Intel is suddenly interested in the so-called foundry business. It is soon to have excess 22 nm and then 14 nm production capacity and will be able to easily undercut any other manufacturer at those feature sizes, all the while offering superior performance. Traditionally this would lead to a market bloodbath with most Intel foundry competitors dying and production costs eventually going back up as companies fail and capacity is cut back. But I don’t think that is what’s happening in this case.
If Intel drove the per-wafer price up to $12,000 or even $20,000 it wouldn’t make enough difference to Intel’s bottom line to be worth the anti-trust risk. They’ll still be cheaper, just a little bit cheaper.
I believe Intel is entering the foundry business mainly as an industrial intelligence operation.
As a chip company exclusively manufacturing its own designs Intel competes with most of those fabless semiconductor companies, but as a foundry -- especially the cheapest best foundry around -- those same companies will open their kimonos to Intel (with strict NDAs in place first, of course).
But the NDA doesn’t really matter in this case because Intel’s purpose isn’t to steal trade secrets -- it is to find companies to buy. Once you buy the company, the NDA dissolves.
Intel needs new product lines sooner than Santa Clara can design them itself. More importantly, having rightly lost some confidence in its ability to predict and lead the market, Intel needs a few astounding ideas from outside and this is by far the easiest way to find those.
Suddenly Intel manufacturing engineers will have a view of the chip market they never had before. Find the best products that are close to market, open the checkbook and buy them up. What better way to find those new products that really work than by manufacturing them in the first place?
If I am right, Intel is emulating Cisco’s 1990s strategy of buying ahead of the next technology wave, though in this case leveraging its superior fab technology to figure out that next wave.
It might even work.
Reprinted with permission
The latest Edward Snowden bombshell that the National Security Agency has been hacking foreign Google and Yahoo data centers is particularly disturbing. Plenty has been written about it so I normally wouldn’t comment except that the general press has, I think, too shallow an understanding of the technology involved. The hack is even more insidious than they know.
The superficial story is in the NSA slide (above) that you’ve probably seen already. The major point is that somehow the NSA -- probably through the GCHQ in Britain -- is grabbing virtually all Google non-spider web traffic from the Google Front End Servers, because that’s where the SSL encryption is decoded.
Yahoo has no such encryption.
The major point being missed, I think, by the general press is how the Google File System and Yahoo’s Hadoop Distributed File System play into this story. Both of these Big Data file systems are functionally similar. Google refers to its data as being in chunks while Hadoop refers to blocks of data, but they are really similar -- large flat databases that are replicated and continuously updated in many locations around the globe so the exact same data can be searched more or less locally from anywhere on Earth, maintaining at all costs what’s called data coherency.
Data replication, which is there for reasons of both performance and fault tolerance, means that when the GCHQ in London is accessing the Google data center there, they have access to all Google data, not just Google’s UK data or Google’s European data. All Google data for all users no matter where they are is reachable through any Google data center anywhere, thanks to the Google File System.
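Here is a minimal sketch of why that matters, using a toy replicated store rather than the real GFS or HDFS APIs: when every region holds a full copy, a "local" read in London returns data that originated anywhere.

```python
# Toy model of geo-replicated storage: every write is copied to every region,
# so a read at any single data center can return data from all users worldwide.

class ReplicatedStore:
    def __init__(self, regions):
        self.replicas = {region: {} for region in regions}

    def write(self, key, value):
        for replica in self.replicas.values():  # replicate to every region
            replica[key] = value

    def read_from(self, region, key):
        return self.replicas[region].get(key)   # purely "local" read

store = ReplicatedStore(["us-east", "eu-london", "asia-tokyo"])
store.write("user:alice@us", "alice's mail")          # written in the US...
print(store.read_from("eu-london", "user:alice@us"))  # ...readable in London
```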
This knocks a huge hole in the legal safe harbor the NSA has been relying on in its use of data acquired overseas, which assumes that overseas data primarily concerns non-US citizens who aren’t protected by US privacy laws or the FISA Court. The artifice is that by GCHQ grabbing data for the NSA and the NSA presumably grabbing data for GCHQ, both agencies can comply with domestic laws and technically aren’t spying on their own citizens when in fact that’s exactly what they have been doing.
Throw Mama from the train.
If Google’s London data center holds not just European information but a complete copy of all Google data then the legal assumption of foreign origin equals foreign data falls apart and the NSA can’t legally gather data in this manner, at least if we’re supposed to believe the two FISA court rulings to this effect that have been released.
This safe harbor I refer to, by the way, isn’t the US-EU safe harbor for commercial data sharing referred to in other stories. That’s a nightmare, too, but I’m strictly writing here about the NSA’s own shaky legal structure:
According to the Foreign Intelligence Surveillance Court of Review: …the Director of National Intelligence (DNI) and the Attorney General (AG) were permitted to authorize, for periods of up to one year, “the acquisition of foreign intelligence information concerning persons reasonably believed to be outside the United States” if they determined that the acquisition met five specified criteria. These criteria included (i) that reasonable procedures were in place to ensure that the targeted person was reasonably believed to be located outside the United States; (ii) that the acquisitions did not constitute electronic surveillance; (iii) that the surveillance would involve the assistance of a communications service provider [I hate to jump in here, but Google says they didn't know about the data being taken, so can this assistance be unknowing or unwilling? -- Bob]; (iv) that a significant purpose of the surveillance was to obtain foreign intelligence information; and (v) that minimization procedures in place met the requirements of 50 U.S.C. § 1801(h).
This is a huge point of law missed by the general news reports -- a point so significant and obvious that it ought to lead to immediate suspension of the program and destruction of all acquired data… but it probably won’t.
That probably won’t happen because Congress seems hell-bent on quickly passing an intelligence reform bill that not only doesn’t prohibit these illegal activities, the bill seems to give them a legal basis they didn’t have before.
Some kind of reform, eh?
This news also blows a hole in the argument that these agencies are gathering data mainly so they’ll be able to retrospectively analyze after the next terrorist attack as was done right after the Boston Marathon bombings. If we already have after-the-fact access to historical data through this hack, why bother even gathering it before?
The other part of this story that’s being under-reported, I think, is exactly how GCHQ is gaining access to Google and Yahoo data. A cynical friend of mine guesses it is happening this way:
"The NSA probably has a Hadoop system set up and linked to Google’s. All data that goes onto Google’s network is automatically replicated on the NSA system. Heck that Hadoop system is probably sitting in Google’s data center. You don’t need to move the data. You just need to access a copy of it. It would not surprise me if this is being done with Google, Yahoo, Microsoft, Facebook, Twitter, … The government has probably paid each of them big bucks to set up, support, and manage a replica of their data in their own data centers".
I think my friend is wrong because I can’t see either Google or Yahoo being stupid enough to help such a process occur. The associated revenue isn’t enough to be worth it for either company.
GCHQ could get the data from a network contractor like BT. Or they could do it themselves by physically tapping the fibers. There is a technique where if you bend an individual fiber into a loop tight enough to defeat total internal reflection, some light escapes the fiber and can be harvested with a detector and the unencrypted data read. All it looks like to the network is a slight signal attenuation. But given that cable bundles hold at least 148 fibers each, such physical extraction would require a unit the size of a refrigerator installed somewhere.
I doubt that the NSA and GCHQ are grabbing the signals from cables between data centers. Rather they are probably grabbing the signals from cables within the data centers -- still unencrypted despite Google’s recently expanded encryption system. I’d bet money on that. These data centers tend to be leased buildings and I’m sure some royal is the beneficial owner of the UK facility and has access to the physical plant…
But this is all just speculation and will probably have to remain so as both governments do all they can to rein in public debate.
My concern is also with what happens down the road. A lot of this more aggressive NSA behavior came in with the Patriot Act and has become part of the agency’s DNA, raising the floor for questionable practices. So the question is less what wrong they will do with this capability than in what direction, and how far, they will extend future transgressions.
This GCHQ business also feels to me like it may have come from the Brits and simply fallen in the lap of the NSA. If not, then why would they be simultaneously fighting the FISA court for the same information from domestic sources?
One thing I find ironic in the current controversy over problems with the healthcare.gov insurance sign-up web site is that the people complaining don’t really mean what they are saying. Not only do they have little to no context for their arguments, they don’t even want the improvements they are demanding. This is not to say nothing is wrong with the site, but few big web projects have perfectly smooth launches. From all the bitching and moaning in the press you’d think this experience is a rarity. But as those who regularly read this column know, more than half of big IT projects don’t work at all. So I’m not surprised that there’s another month of work to be done to meet a deadline 5.5 months in the future.
Yes, the Obama Administration was overly optimistic and didn’t provide enough oversight. Yes, they demanded fundamental changes long after the system design should have been frozen. But a year from now these issues will have been forgotten.
The most important lesson here for government, I’d say, is to be more humble. At the heart of the federal problems you’ll find arrogance. California and Kentucky, after all, are offering identical services that are reportedly going well: why didn’t the Feds learn from them? I’d guess it’s because they felt they had nothing to learn from the provinces.
"Not all smart people work at Sun Microsystems," Bill Joy used to say. And not all smart people work for the US Government, either.
I’d especially keep in mind that the people who are most upset about site performance issues are those who oppose its very existence. They are glad the problems are happening -- happy to complain. It’s not like they want the site to actually be more usable. I’d venture to say that these critics, while demanding specific improvements, would really be happier if those improvements didn’t happen. They are hypocrites.
Meanwhile there’s another vastly bigger and more serious point being missed here, I think, and that’s the role of Big Data in this whole US healthcare fiasco. This was explained to me recently by Jaron Lanier, by the way, so I don’t claim to have thought of it.
Jaron is my hero.
His point is that Big Data has changed the US health insurance system and not for the better. There was a time when actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata -- data about data -- because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals. In that system, insurance company profitability increased linearly with scale so health insurance companies wanted as many policyholders as possible, making a profit on most of them.
Then in the 1990s something happened: the cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis. This moved the health insurance business from being based on setting rates to denying coverage. In the US the health insurance business model switched from covering as many people as possible to covering as few people as possible -- selling insurance only to healthy people who didn’t much need the healthcare system.
The goal went from making a profit on most enrollees to making a profit on all enrollees. Since in the end we are all dead, this really doesn’t work as a societal policy, which most of the rest of the world figured out long ago.
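Here is a toy model of that shift, with invented numbers, purely to illustrate the incentive: with only group-level statistics the insurer prices the whole pool and profits on most enrollees; with individual predictions it covers far fewer people but profits on every one of them.

```python
import random
random.seed(1)

# Toy population (invented numbers): expected annual claim cost per person.
population = [random.choice([500, 1500, 20000]) for _ in range(100000)]

# Old model: one pooled premium based on group statistics, sold to everyone.
pooled_premium = 1.10 * sum(population) / len(population)
profitable = sum(1 for cost in population if cost < pooled_premium)
print(f"Group pricing: cover {len(population)}, "
      f"profit on {profitable} of them ({profitable / len(population):.0%})")

# New model: predict each individual's cost and refuse anyone expensive.
insured = [cost for cost in population if cost < pooled_premium]
print(f"Individual underwriting: cover only {len(insured)}, "
      f"profit on all of them (100%)")
```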
US health insurance company profits soared but we also have millions of uninsured families as a result.
Given that the broad goal of society is to keep people healthy, this business of selling insurance only to the healthy probably can’t last. It’s just a new kind of economic bubble waiting to burst.
Some might argue that the free market will eventually solve this particular Big Data problem. How? On the basis of pure economic forces I don’t see it happening. If I’m wrong, please explain.
Tell us all in detail how this will work.
My friend Nikola Bozinovic (say that three times fast) is a very sharp software developer originally from Serbia who has, over the years, worked for most of the usual suspect American software companies. He is also the guy who restored from a grotty old VHS tape my film Steve Jobs — The Lost Interview. And as of this week he’s the CEO of Mainframe2, an exciting startup strutting its stuff at the DEMO conference in Santa Clara.
Mainframe2 claims it can put almost any Windows application into the cloud, making apps usable from any device that can run a web browser supporting html5. We’re talking Photoshop and AutoCAD on your iPad. This is a big deal.
Normally moving an app from a PC to a server and then virtualizing it in the cloud is a multistep process that can take weeks or months to get running smoothly but Nikola says Mainframe2 can do the job in about 10 minutes. The application code runs across many virtual machines in the cloud and -- this is especially important -- supports nVIDIA’s virtual GPU standard, so graphics performance is especially strong. And that’s the point, because it’s graphically-intensive apps like video editing that Mainframe2 will be targeting from the start when its service becomes commercially available later this fall.
Here’s what I find exciting about this. First, it’s cross-platform. The apps are (so far) all Windows, but the user can be on a Mac or anything else that supports html5. Next Mainframe2 appears to use an application rental model. I use Photoshop maybe six times per year so renting makes a lot more sense than owning. Renting the software I can pay a few dollars rather than hundreds. I don’t have to worry about keeping the application current. I don’t even need a powerful computer, since all the crunching takes place in the cloud.
Need 100 virtual GPUs for your iPhone? Okay.
If I used Photoshop all day every day of course I’d want my own local copy, but even then I can see emergencies happening where being able to edit on a smart phone might save the day.
VentureBeat wrote about Mainframe2 an hour or so ago and they complained about latency. It’s probably there but as an idiot I just don’t care that much and Nikola says it won’t be noticeable by the time the service is widely available. I believe him and here’s why. Mainframe2 sends the screen image as an H.264 video stream, so there’s some challenge to encode commands going in one direction, run the app, then send encoded video back. But that’s nothing compared to Nikola’s last job which was processing commands from pilots sitting in Oklahoma flying Predator drones in Afghanistan then cleaning-up the return video signal so the pilots could see what they were shooting at, all in real time of course and over a multi-link satellite connection.
Compared to that, remote AutoCAD is easy.
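For the curious, here is a rough round-trip budget for this kind of screen-streaming setup. Every number is a guess of mine, not a Mainframe2 figure, but it shows why staying under the roughly 100 millisecond threshold where interaction starts to feel laggy is plausible on a decent connection.

```python
# Hypothetical latency budget for streaming a remote app as H.264 video.
# Every number is a placeholder guess, just to show the shape of the problem.
budget_ms = {
    "capture input + send to cloud": 15,
    "app processes the event":       10,
    "render frame on virtual GPU":    8,
    "H.264 encode":                  10,
    "network back to the device":    15,
    "decode + display":              12,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:32s} {ms:3d} ms")
print(f"{'round trip':32s} {total:3d} ms  (noticeable above ~100 ms)")
```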
Mainframe2 is a form of remote computing but it isn’t VNC or RDP, it isn’t VMware or Citrix, it’s something totally new that can scale power like crazy (as many CPUs and GPUs as you like) which those others can’t. I wonder what people will end up doing with it? Because I’m sure this will tap a vein of creativity in the user community.
I wonder, too, whether the software vendors will love it or hate it? Probably both. Remember this application-in-a-browser idea was what turned Bill Gates and Microsoft against Netscape in the mid-1990s.
Of course all is blissful when it’s still a demo rather than a shipping service, but there is very solid technology here that deserves a look.
And no, I don’t own any shares in the company, though I wish I did.
Reprinted with permission
No law is more powerful or important in Silicon Valley than Moore’s Law -- the simple idea that transistor density is continually increasing which means computing power goes up just as costs and energy consumption go down. It’s a clever idea we rightly attribute to Gordon Moore. The power lies in the Law’s predictability. There’s no other trillion dollar business where you can look down the road and have a pretty clear idea what you’ll get. Moore’s Law lets us take chances on the future and generally get away with them. But what happens when you break Moore’s Law? That’s what I have been thinking about lately. That’s when destinies change.
There may have been many times that Moore’s Law has been broken. I’m sure readers will tell us. But I only know of two times -- once when it was quite deliberate and in the open and another time when it was more like breaking and entering.
The first time was at the Computer Science Lab at Xerox PARC. I guess it was Bob Taylor’s idea but Bob Metcalfe explains it so well. The idea back in the early 1970s was to invent the future by living in the future. "We built computers and a network and printers the way we thought they’d be in 10 years," Metcalfe told me long ago. "It was far enough out to be almost impossible yet just close enough to be barely affordable for a huge corporation. And by living in the future we figured out what worked and what didn’t".
Moore’s Law is usually expressed as silicon doubling in computational power every 18 months, but I recently did a little arithmetic and realized that’s pretty close to a 100X performance improvement every decade. One hundred is a much more approachable number than 2 and a decade is more meaningful than 18 months if you are out to create the future. Cringely’s Nth Law, then, says that Gordon Moore didn’t think big enough, because 100X is something you can not only count on but strive for. One hundred is a number worth taking a chance on.
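The arithmetic behind that 100X, for anyone who wants to check it: doubling every 18 months compounds to a bit more than 100X in 120 months.

```python
# Doubling every 18 months, compounded over a 10-year (120-month) decade.
doublings_per_decade = 120 / 18          # about 6.67 doublings
improvement = 2 ** doublings_per_decade
print(f"{doublings_per_decade:.2f} doublings -> {improvement:.0f}x per decade")  # ~102x
```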
The second time we broke Moore’s Law was in the mid-to-late 1990s but that time we pretended to be law abiding. More properly, I guess, we pretended that the world was more advanced than it really was, and the results -- good and bad -- were astounding.
I’m talking about the dot-com era, a glorious yet tragic part of our technological history that we pretend didn’t even happen. We certainly don’t talk about it much. I’ve wondered why? It’s not just that the dot-com meltdown of 2001 was such a bummer, I think, but that it was overshadowed by the events of 9/11. We already had a technology recession going when those airliners hit, but we quickly transferred the blame in our minds to terrorists when the recession suddenly got much worse.
So for those who have forgotten it or didn’t live it, here’s my theory of the euphoria and zaniness of the Internet as an industry in the late 1990s during what came to be called the dot-com bubble. It was clear to everyone from Bill Gates down that the Internet was the future of personal computing and possibly the future of business. So venture capitalists invested billions of dollars in Internet startup companies with little regard to how those companies would actually make money.
The Internet was seen as a huge land grab where it was important to make companies as big as they could be as fast as they could be to grab and maintain market share whether the companies were profitable or not. For the first time companies were going public without having made a dime of profit in their entire histories. But that was seen as okay -- profits would eventually come.
The result of all this irrational exuberance was a renaissance of ideas, most of which couldn’t possibly work at the time. While we tend to think of Silicon Valley being built on Moore’s Law making computers continually cheaper and more powerful, the dot-com bubble era only pretended to be built on Moore’s Law. It was built mainly on hype.
In order for many of those 1990s Internet schemes to succeed the cost of server computing had to be brought down to a level that was cheaper even than could be made possible at the time by Moore’s Law. This was because the default business model of most dot-com startups was to make their money from advertising and there was a strict limit on how much advertisers were willing to pay.
For a while it didn’t matter because venture capitalists and then Wall Street investors were willing to make up the difference, but it eventually became obvious that an AltaVista with its huge data centers couldn’t make a profit from Internet search alone.
The dot-com meltdown of 2001 happened because the startups ran out of investors to fund their Super Bowl commercials. When the last dollar of the last yokel had been spent on the last Herman Miller office chair, the VCs had, for the most part, already sold their holdings and were gone. Thousands of companies folded, some of them overnight. And the ones that did survive -- including Amazon and Google and a few others -- did so because they’d figured out how to actually make money on the Internet.
Yet the 1990s built the platform for today’s Internet successes. More fiberoptic cable was pulled than we will ever require and that was a blessing. Web 2.0, then mobile and cloud computing each learned to operate within their means. And all the while the real Moore’s Law was churning away and here we are, 12 years later and enjoying 200 times the performance we knew in 2001. Everything we wanted to do then we can do easily today for a fraction of the cost.
We had been trying to live 10 years in the future and just didn’t know it.
Almost every week some reader asks me to write about Bitcoin, currently the most popular so-called crypto currency and the first one to possibly reach something like critical mass. I’ve come close to writing those columns, but just can’t get excited enough. So this week when yet another reader asked, it made sense to explain my nervousness. Bitcoin is clever, interesting, brilliant even, but I find it too troubling to support.
But first, why should you believe me? You shouldn’t. Though I’m year after year identified by the Kauffman Foundation as one of the top 50 economics bloggers in America, that only means I get to hang out occasionally with the real experts, eating Kansas City barbecue. Unlike them I’m not an economist, I just play one on TV. So don’t take my word for anything here: just think about the arguments I present and whether they make sense to you.
For those who don’t follow Bitcoin, it is both an electronic payment system and a currency invented by someone somewhere (nobody really knows who -- the inventor uses a pseudonym that makes some folks think he/she is Japanese but again nobody really knows). Bitcoin’s design purposefully keeps control out of the hands of central banks and governments, avoiding the threats of shutdown and confiscation.
Creating new Bitcoins can only happen once miners have solved a proof-of-work puzzle based on the SHA256 hash function. It’s simply "here are some bytes, find a SHA256 hash of this byte array that is less than this tiny number. To make it more difficult, we progressively make the tiny number tinier". There can be only 21 million Bitcoins ever found or mined, though once found, Bitcoins can be divided into 10^8 smaller subparts called satoshis, which are what’s actually used for buying things.
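Here is a toy version of that find-a-hash-below-a-tiny-number game in a few lines of Python. It captures only the idea, not Bitcoin’s real block format (actual mining double-SHA256s an 80-byte block header and encodes the target differently).

```python
import hashlib
import struct

def toy_mine(data, difficulty_bits, max_nonce=2**32):
    """Find a nonce so that SHA256(data + nonce) is below a target."""
    target = 2 ** (256 - difficulty_bits)   # smaller target = harder puzzle
    for nonce in range(max_nonce):
        digest = hashlib.sha256(data + struct.pack("<I", nonce)).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

nonce, digest = toy_mine(b"block header bytes", difficulty_bits=16)
print(f"nonce {nonce} gives hash {digest[:16]}... (starts with ~16 zero bits)")
```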
Bitcoins are not backed by any underlying commodity or government. There’s no full faith and credit clause behind them, but on the other hand Bitcoins are inflation-resistant because of constrained supply and can’t easily be counterfeited, either.
What makes Bitcoins have value is our assigning value to them. If I sell my house for a Bitcoin that doesn’t make a Bitcoin worth as much as my house, but it creates a plausible value that can be confirmed if I can in turn use the Bitcoin to buy something else of equal or greater value to my house. And that’s the direction this currency seems to be heading, because it is being accepted in some places for commerce.
If accepting Bitcoins for payment makes no sense think of those people who start with something mundane then trade and trade and trade until they have turned a paperclip into a house. This is no different.
Much of the attraction of Bitcoins comes from the efficiency with which they can be traded (by e-mail, even anonymously with no postage, taxes, or other fees attached) and their resistance to government meddling. Bitcoins are the bearer bonds of cyber currencies.
All this is good we’re told. Bitcoins are in some ways analogous to gold, which is also seen as having enduring intrinsic value.
So why then do I have doubts? I’ll lay out a bunch of reasons here in no particular order.
1) Bitcoins consistently cost more to generate, find or mine, than they fetch on the open market. People way smarter than me have figured this out and you can see their analysis here (it’s for Litecoins, not Bitcoins, but the same forces are at work). So maybe Bitcoins are analogous to gold, but gold that’s worth less than the cost of production.
This is further confirmed by the robust cottage industry in Bitcoin mining hardware. Mining Bitcoins means running millions of calculations until one of a finite number of successful answers is found. These calculations were first done on CPUs then GPUs then FPGAs and now ASICs. For under $200 you can buy a screaming little Bitcoin mining machine but it won’t earn you $200 in Bitcoins unless they dramatically increase in value down the road. This happens from time to time (the increase in value) but it still doesn’t make sense to build when you can buy for less. So Bitcoins as a production commodity make no sense.
You have to ask yourself why people would sell Bitcoin generators. Why don’t they just use the generators themselves to find more Bitcoins? Because it consistently costs more than a dollar to mine a dollar’s worth of Bitcoins, that’s why, and the comparison to gold falls apart.
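If you want to check that math for whatever hardware and prices exist when you read this, the standard expected-revenue formula is simple enough. Every number below is a placeholder of mine, not a current market value.

```python
# Expected Bitcoin mining economics. Every input below is a placeholder;
# plug in the hardware specs, network hash rate, and prices of your era.
my_hashrate       = 5e9    # hashes/second for a small $200 ASIC (assumed)
network_hashrate  = 5e15   # total network hashes/second (assumed)
block_reward      = 25     # BTC per block (the 2013-era reward)
blocks_per_day    = 144    # one block roughly every 10 minutes
btc_price         = 200.0  # dollars per BTC (assumed)
power_watts       = 50     # device power draw (assumed)
electricity_price = 0.12   # dollars per kWh (assumed)

coins_per_day   = (my_hashrate / network_hashrate) * blocks_per_day * block_reward
revenue_per_day = coins_per_day * btc_price
power_cost_day  = power_watts / 1000 * 24 * electricity_price

print(f"Revenue/day: ${revenue_per_day:.2f}, power cost/day: ${power_cost_day:.2f}")
print(f"Net/day: ${revenue_per_day - power_cost_day:.2f} "
      f"(before even amortizing the hardware, or the next difficulty jump)")
```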
This is a familiar story with mining. Remember during the California Gold Rush the great fortunes made were those of Crocker (a banker, not a miner) and Stanford (a storekeeper and again not a miner). The only great American fortune ever based on gold mining, in fact, was that of William Randolph Hearst, whose father started the Homestake Mining Company that endures today. Notice, however, that Hearst (the son) wisely decided to diversify his fortune into media and starting small wars.
2) Bitcoins, while possibly uncrackable, are definitely not unhackable. Mining Bitcoins requires the validation of 90 other random miners before your Bitcoins are judged real and assignable, but what’s to keep me from owning 90+ Bitcoin mining accounts and gaming the system? Admittedly it’s not that easy: in practical terms I’d need a majority of the world’s mining nodes to make that scam stick and in a rapidly growing market that kind of concentration is difficult to achieve. But it can be done -- especially if nation-states are involved. What if China or Russia or the NSA threw its financial and computing power into Bitcoin hacking -- how long would it take them to accumulate more than 50 percent of all mining nodes? What if Amazon Web Services simply assigned all unoccupied EC2 cores to this task? This is plausible enough that I think we have to expect it will be at least tried.
The Bitcoin hack, then, isn’t cornering the market in a classic sense but cornering enough nodes to control the voting.
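Taking the column’s framing at face value (90 random validators per check, which is not how Bitcoin’s actual consensus works, but run with it), here is a quick simulation of how often an attacker controlling a given share of all nodes would also control a majority of a random 90-node panel. The threshold really does sit right around 50 percent.

```python
import random
random.seed(42)

# Toy simulation of the 90-random-validators framing (not real Bitcoin
# consensus): how often does an attacker who controls some fraction of all
# mining nodes also control a majority of a random 90-node panel?
def attack_success_rate(attacker_share, panel=90, trials=20000):
    wins = 0
    for _ in range(trials):
        controlled = sum(random.random() < attacker_share for _ in range(panel))
        if controlled > panel // 2:
            wins += 1
    return wins / trials

for share in (0.40, 0.50, 0.55, 0.60):
    print(f"attacker controls {share:.0%} of nodes -> "
          f"controls the panel {attack_success_rate(share):.1%} of the time")
```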
3) Bitcoin, as the first crypto-currency, is the one that will be tested in court. Simply outlawing Bitcoins in one country won’t have that much effect on the concept, but given there are other crypto-currencies around, it might hurt Bitcoin itself. I’d assign the tactical advantage to Litecoin, which is cheaper than Bitcoin and may be able to leverage its second-mover advantage and take the day. Google didn’t invent the search engine nor did Microsoft invent the spreadsheet, remember.
The Winklevoss brothers, who reportedly own one percent of all Bitcoins, should be concerned about being too concentrated in the currency.
4) But my biggest concern about Bitcoin stems from what’s otherwise seen as the currency’s greatest strength -- its rational foundation and apparent immunity from government meddling. To hear Libertarians talk about it, the success of Bitcoin will free us forever from the IRS, Treasury Department, and the Federal Reserve. Bitcoin, as a currency without an associated bureaucracy, is immune to political meddling so no stupid government monetary programs that backfire or don’t work are possible. Bitcoin supposedly protects us from ourselves.
That’s fine as far as it goes, but the Bitcoin algorithm has left no place for compassion, either. Governments and treasuries in times of crisis sometimes make decisions that appear to go against the interests of the state. We saw many of those around 2008 -- admittedly heroic measures taken primarily to fix dumb-ass mistakes. Bitcoin, for all its digital purity, makes such policies impossible to implement, taking away our policy safety net.
Maybe that’s actually a good thing, but I for one am not yet willing to bet on it.
Reprinted with permission
The Innovator’s Dilemma, a 1997 book by Harvard professor Clayton Christensen, made the point that successful companies can lose their way when they pay too much attention to legacy products and not enough attention to new stuff. They are making so much money they either don’t see a competitor rising up or are too complacent to feel threatened. In either case the incumbent generally loses and the upstart (usually one of many) generally wins. The best way for successful companies to avoid this problem is by inventing the future before their competitors do.
We see this pattern over and over in high tech. Remember Lotus? Remember WordPerfect? Remember Borland? And it’s not just in software. Remember IBM sticking too long with the 80286 processor? Remember the Osborne Executive?
Microsoft certainly faces this dilemma today, having nothing with which to replace Windows and Office. Some say Apple, too, is living now on the wrong side of the innovation curve, but I don’t think so. I think Cupertino has a plan.
When Apple announced its iPhone 5c and 5s mobile phones I alluded to having an idea of some broader strategy Cupertino had in mind for the devices, especially the iPhone 5s.
Working from the clues in that announcement, here’s what I think is happening. At the very moment when Apple critics are writing off the company as a three- or four- or five-hit wonder, Apple is embracing the fact that desktop computers only represent about 15 percent of its income, making Apple clearly a mobile technology company. As such, it is more important for Apple to expand its mobile offerings than its desktops. So Apple in a sense is about to make the Macintosh deliberately obsolete.
This doesn’t mean Apple is going out of the Mac business. Why would it drop a hardware platform that still delivers industry-leading profit margins? But a growing emphasis from here on out will be the role of iOS on the desktop.
I see the iPhone 5s and whatever follows as logical desktop replacements. They, and phones like them, will be the death of the PC.
Jump forward in time to a year from today. Here’s what I expect we’ll see. Go to your desk at work and, using Bluetooth and AirPlay, the iPhone 5s or 6 in your pocket will automatically link to your keyboard, mouse, and display. Processing and storage will be in your pocket and, to some extent, in the cloud. Your desktop will require only a generic display, keyboard, mouse, and some sort of AirPlay device, possibly an Apple TV that looks a lot like a Google Chromecast.
That’s what I have running in the picture on this page, only with my iPhone 5 and iOS 7. A year from now I expect the apps will detect and fill the larger screen. And that Mac-in-your-pocket will have not only iWork installed, but also Microsoft Office, which Microsoft will be forced to finally release for iOS. Apple making iWork free on new devices -- devices powerful enough for this desktop gambit -- guarantees that Microsoft will comply.
Go home and take your work with you. Go on the road and it is there, too. IT costs will drop for businesses as desktop PCs are replaced. Having a desktop at home will cost in the $200 range, bringing costs for home IT down, too.
Why would Apple do this? Well for one thing if it doesn’t Google will. For that matter Google will, anyway, so Apple has some incentive to get this in the market pronto.
There are other reasons why Apple would do this. For one thing it is much more likely to hurt the PC market than the Mac market, since pocket desktop performance probably won’t be there for Apple’s core graphics and video markets. Mac sales might actually increase as sales are grabbed from faltering Windows vendors.
But in the end it doesn’t really matter to Apple what happens to the Mac since it is a phone company now. And by embracing its phone-i-ness, Apple will be giving its mobile business a huge boost. Want an iPhone desktop? That will require a new phone, probably sooner than you would otherwise have upgraded. If you are thinking of this new phone as your total computing environment, albeit backed-up to the cloud, you’ll be inclined to spend more on that phone, opting for the maximum configuration. Apple makes a higher profit on maxed-out iPhones than on base phones. And instead of upgrading your desktop every 2-3 years, you’ll now be doing it every 1-2 years.
But wait, there’s more! This desktop gambit completely bypasses Wintel. There’s no pro-Windows bias in the phone market. If anything there’s an anti-Windows bias, so Apple will be playing to its strength. This will be a huge blow to Microsoft, Windows, and Office, yet Redmond will lean into it in an attempt to save Office. Either that or die.
This is a chance for Apple to reinvent the desktop exactly as it reinvented the music player, the mobile phone, and the tablet. For those who say Apple can’t do it again, Apple is already doing it again.
Ironically, for all the stories I’ve been reading about the death of the desktop, the strategy I am laying out guarantees a desktop resurgence of sorts -- only one that won’t help Dell or HP a bit.
Now take this idea one step further. There’s an opportunity here for Apple to promote yet another hardware platform -- a mobile interface to go with that iPhone. This is a device I seriously considered doing myself for Android a couple years ago but the performance just wasn’t yet there.
You see for all the advantages of having a desktop in your pocket, we really prefer larger displays and even keyboards to do actual work. Tablets have their place, but that place is not everywhere. Commodity desktop peripherals are easy to provide at work and home but much more difficult on the road. Use an iPad to give a bigger screen to your iPhone? That doesn’t make sense. So I expect Apple to build for road warriors a new class of devices that have the display, keyboard and trackpad of a notebook but without the CPU, memory or storage. Call it a MacBook Vacuum, because it’s a MacBook Air without the air.
More likely, since it’s an iOS device, Apple will call this gizmo an iSomething. It will be impossibly strong and light -- under a pound -- the battery will last for days, and it ought to cost $199 for 11-inch and $249 for 13-inch, but Apple being Apple will charge $249 and $349.
What I’m predicting, then, is an Apple resurgence. But let’s understand something here: this is yet another product class that Apple will dominate for a while then eventually lose. It’s a 3-5 year play just like the iPod, iPhone, and iPad. Google and Amazon will be in hot pursuit, each more willing than Apple to pay to play. Cupertino will have yet another dilemma a few years from now and possibly another revolution to foment after this one if it can think of something new. The firm will need it. Still I see happy days ahead for Apple with iOS 7 and the legacy of Steve Jobs preserved for now.
Reprinted with permission
Wednesday at the TechCrunch Disrupt conference in San Francisco, Yahoo CEO Marissa Mayer presented her company’s side of fighting the National Security Agency over requests to have a look-see at the data of Yahoo users. It’s a tough fight, said Mayer, and one that takes place necessarily in private. Mayer was asked why tech companies had not simply decided to tell the public more about what the US surveillance industry was up to. "Releasing classified information is treason and you are incarcerated," she said.
Go directly to jail? No.
How would that work, exactly? Would black helicopters -- silent black helicopters -- land at Yahoo Intergalactic HQ and take Marissa Mayer away in chains? Wouldn’t that defeat the whole secrecy thing to see her being dragged, kicking and screaming, out of the building?
No really, what would happen?
So I asked my lawyer, Claude.
I’ve written about Claude Stern many times and he appeared, playing himself, in Triumph of the Nerds. Claude has been my lawyer since the early 1990s and is today a partner at Quinn Emanuel Urquhart & Sullivan LLP, one of America’s largest law firms devoted solely to litigation (so don’t even think of suing me). Quinn Emanuel is, to put it bluntly, 600 bad-ass lawyers who hate to lose and rarely do. As far as I know Quinn Emanuel does not represent Yahoo.
My inclination, if I were Marissa Mayer, would be to tell the NSA to make my day: "Take me to jail, but understand my company will pay whatever it costs to fight this, we will force it into the open, and -- by the way -- I’m still breast-feeding, so my baby comes too".
Film at 11.
Here’s what the far more level-headed Claude says: "Companies typically do not yield to government interference unless they feel there is something to their advantage. Does anyone really believe that NSA would arrest FaceBook’s CEO if it did not comply with a random, illegal order? Please".
Notice how Claude cleverly replaced Yahoo’s Mayer with Facebook’s Zuckerberg, who is clearly not breast-feeding? But his point is still the same: companies could stand up to these NSA orders and most likely beat them if they chose to.
Either these companies are, as Claude suggests, getting more from the NSA than we are presently aware of, or maybe each CEO is just hoping someone else will be the one to stand up to the bully. I prefer to think it is the latter case. And since I know a lot of CEOs and the way they tend to think, I’d put money on that being the situation.
The CEOs and their companies, then, are either gutless or corrupt. Charming.
The NSA orders are illegal, it’s not treason to reject them, and even if it were technically treason there is still a right to both due process and -- in 21st century corporate America -- to spend whatever it takes to beat the rap.
There’s no way Marissa’s baby would spend even an hour in jail, which is exactly why I wish she’d take a public stand on the issue, this nonsense would go away, and we could get back to solving real problems.
Reprinted with permission
Some readers have asked me for a post on the new Apple iPhones announced two days ago. I’ll get to that in time but prefer to do so when I actually have an iPhone 5S in my hands because I have a very specific column in mind. And no, it’s not the column you think it is. But this is still a good time to write something about Apple in general, because Cupertino appears to now stand at a crossroads.
There is a world of difference between Microsoft and Apple but one way they are similar is in facing a generational change. Another way they are similar is in having robust legacy businesses that both put a drag on such change (who needs change, we’re doing great!) and make it easy to wait or at least to go slowly. But no matter how much money they have in the bank, each company must eventually come to terms with how it is going to move forward in an evolving market. Neither company has.
Microsoft is in the tougher position because its core business isn’t growing, but Apple can pretty easily plot its own growth and see where that will eventually peak and tail-off, too. Before then there is still room for plenty of growth as Apple enters new global markets and eventually adds all carriers (China Mobile, for example). In Apple’s case I’m thinking of it right now as a mobile phone company, because that’s, for the most part, what it has become.
Following past trends, the challenge for Microsoft is what next to copy; for Apple it’s what next to re-create. Microsoft’s job is easier but is made complicated by the company’s innate need to carry with it the entire Windows ecosystem. Yet there are some places where Windows just doesn’t fit. Apple’s challenge is to both re-create and to do so without Steve Jobs. Not wanting to overstate or understate the value of Steve to Apple, I see his role as one of demanding re-creation, which as anyone who knew Steve will tell you has nothing to do with recreation.
Two years since the death of Steve Jobs, Apple is still trying to get its bearings. Plenty is happening in Cupertino, I’m sure, but how much of that will make it to market? There’s a big numbers problem here: Apple has grown so large that its options for new businesses are limited to those that can add tens of billions in sales. Selling six million Apple TVs in a year is still a hobby to Apple, because that’s only $600 million.
To some extent I think Apple has been treading water or has even been indecisive, but I don’t attribute that to Steve’s passing. I believe it was already happening back when Steve was fully in charge. Secrecy, you see, can hide a lot of things, even nothing when the world wants to believe that something is there.
You may recall I wrote a column a couple years ago about Apple’s huge North Carolina data center. This was just before we moved back to California. In that column I questioned whether anything was even happening in the $1 billion facility since nobody was driving in or out. It seemed overkill to me. Then recently a longtime reader (and you know with me longtime means a long time) told me about meeting a representative of the company that built all the servers in that Apple data center -- all 20,000 of them.
Twenty thousand servers is a lot of servers. It’s certainly enough servers to run iTunes and Apple’s various App Stores as well as Software Update. Twenty thousand servers feels about right for the Apple we know today. But my friends who build data centers tell me that Apple facility in North Carolina could hold a million servers or more. The data center appears to be only five percent populated.
It’s one thing to build with the future in mind, but building a facility twenty times bigger than you need is more than that, it’s a statement, an obfuscation, or maybe it’s a joke.
For a company with a $600 million hobby, why not a $1 billion joke?
Google and Facebook have way more servers than that, so maybe Apple isn’t their competitor at all? And of course Apple isn’t, not directly.
My guess is Apple’s trying to keep its options open. There are sure to be competing forces inside vying to control which industry the company will next re-create. It’s probably fair to assume that decision has not yet been made. It’s also fair to predict that Apple would be smart to get on with it, whatever it is.
When Steve Jobs retired and Tim Cook took over as Apple CEO I predicted there was another shoe to drop. I simply didn’t see Cook, a manufacturing guy, as the one to lead Apple into its next stage of growth. I thought Steve had some grand plan, some big surprise. I even thought I knew who would be Steve’s eventual successor. Only when I wrote that, Steve reached out to me to say that -- "yet again" in his words -- I was wrong. It was the last time I ever heard from him.
I don’t know what’s happening with Apple beyond the obvious burnishing that’s taking place. Look at that iPhone 5S and you’ll see a piece of jewelry -- a luxury item in the tradition of luxury meaning good design and craftsmanship, not just a high price. I can’t wait to get mine. But the iPhone 5S isn’t the future of Apple. That’s yet to be written.
Reprinted with permission
A good friend of mine called Microsoft buying Nokia "two stones clinging together trying to stay afloat". I wouldn’t go that far but I don’t think the prognosis is very good. On the other hand, I’m not sure it has to be good for Microsoft to achieve its goals for the merger. Huh?
This is why you come here, right, for my lateral thinking? I don’t think Nokia has to succeed in order for Microsoft to consider the acquisition a success.
This idea came to me as I was listening to the Jeep radio on my way into town to report my Internet outage. An editor from CNet was being interviewed about the Nokia acquisition on KCBS and he said Microsoft was "betting the company" on the deal. That sounded odd to me because $7.2 billion is a lot of money but Microsoft paid more than that for Skype. It’s something under a year of earnings for a company that has several such years stashed away. So why does this have to be Microsoft betting the company?
It doesn’t. The CNet guy is wrong.
The Nokia acquisition will have no impact on Microsoft’s Windows, Office, MSN, Xbox, or server strategies. None of those operations will take a budget hit to pay for Nokia and none of them will be merged into the Nokia operation. So most of Microsoft (certainly all the profitable bits) will go on as though Nokia had never happened.
This doesn’t mean Nokia isn’t important to Microsoft, but it sets some bounds on that importance. It’s possible, for example, that Skype might be folded into Nokia somehow but I’ll be surprised if that happens.
So why, then, did Microsoft buy Nokia? The stated reason is to better compete with Android and iOS, furthering Ballmer’s new devices and services strategy, but I think that game is already lost and this has more to do with finance than phones.
Microsoft, like Apple and a lot of other companies, has a problem with profits trapped overseas, where they avoid US taxation for a while. The big companies have been pushing for a tax holiday or at least a deal of some sort with the IRS but it isn’t happening. So Apple, sitting on $140+ billion, has to borrow $17 billion to buy back shares and pay dividends because so much of its cash is tied up overseas. But not Microsoft, which just bought Nokia -- a foreign company -- with some of its overseas cash. Redmond said so today. That makes the real price of Nokia not $7 billion but more like $4.5 billion, because it’s all pre-tax money.
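The back-of-the-envelope version, assuming the then-current 35 percent US statutory corporate rate (my assumption, not a Microsoft disclosure): spending $7.2 billion of untaxed overseas cash is roughly like spending its after-tax equivalent at home.

```python
purchase_price = 7.2e9    # the Nokia deal, paid from overseas cash
us_corporate_rate = 0.35  # assumed statutory US rate at the time

# If that cash had been repatriated and taxed first, only this much
# after-tax money would have been left to spend.
after_tax_equivalent = purchase_price * (1 - us_corporate_rate)
print(f"${after_tax_equivalent / 1e9:.2f} billion")  # ~ $4.7B, close to the column's $4.5B
```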
Not only is Nokia cheaper than it looks, those 32,000 Nokia employees coming over to Microsoft transform the company into a true multinational with all the tax flexibility that implies. Microsoft may never pay US taxes again.
I think a lot of those acquired Nokia employees won’t be around for long, either. Most of them are in manufacturing, remember. Nokia has factories but Microsoft doesn’t. Nor does Microsoft want factories. So over some period of time I’d expect most of those former Nokia factories and former Nokia manufacturing employees to be sold off, primarily to Chinese and Indian acquirers. And the cash generated by those factory sales won’t be counted as profits because at best Microsoft will get rid of them at cost (no capital gain and no capital gain tax). So what does that make Microsoft’s acquisition of Nokia?
Money laundering.
It’s also the acquisition of a global brand and a chance to keep Nokia from jumping into the Android camp.
But what of Stephen Elop, former Nokia CEO and now (again) a Microsoft VP. Is he poised to take Ballmer’s job?
Not by a long shot.
Elop failed at Nokia and he abandoned Microsoft by going to Nokia in the first place so I hardly think Ballmer will hand him the big job. There is a column or two in this question (who should run Microsoft?) and I have some ideas, but let’s leave that for another time and just say for now that it won’t be Stephen Elop.
Microsoft still has a problem in the phone business of course and if Nokia isn’t the answer what is? The more obvious acquisition to me was always Blackberry, not Nokia, and that could still come. Or if Microsoft is really serious about devices let it buy Qualcomm, which Intel was too stupid to buy.
Microsoft may be dumb, but it isn't stupid.
This column, the obvious post on Microsoft buying most of Nokia, is arriving later than I had hoped because we had an Internet failure today at our house on the side of a mountain in Sonoma County near Santa Rosa. We’re 15 minutes from town but the terrain is such that there’s no cellphone service from any carrier, we’re beyond the reach of DSL, there is no cable TV, so our only choices for Internet access are crappy satellite Internet or non-crappy fixed wireless, which we get from an ISP called CDS1.net. That connection is really good since the ISP’s tower in this part of the county is about 200 feet from my office window. It’s a pretty robust operation, too, with backup generators that also support the Sheriff’s radio network. So I was surprised when my Internet connection went down early Tuesday morning and dismayed when I realized that it was a single point of failure: I had no backup ISP and even my phones are all voice-over-IP. In order to report my outage I had to drive to town. There I learned that it was a router failure on the mountain so the prospect of using 200 feet of Cat5 to connect wasn’t an option. That’s why this column was written at Starbucks.
Reprinted with permission
This past weekend I was invited to spend an hour talking about Silicon Valley business with a group of MBA students from Russia. They were on a junket to Palo Alto from the Moscow School of Management Skolkovo. I did my thing, insulting as many people and companies as possible, the students listened politely, and at the end there were a few questions, though not nearly as many as I had hoped for.
If you’ve ever heard one of my presentations the most fun tends to take place during the Q&A. That’s because I can’t know in advance what a group really cares about but in the Q&A they can tell me and sometimes we learn a thing or two. One question really surprised me and inspired this column: "In Silicon Valley," the MBA student asked, "it seems that mentoring is an important part of learning business and getting ahead, yet mentoring is unknown in Russia. How does it work when there is no obvious reward for the mentor? Why do people do it?"
I suspect that this question says far more about Russian business culture than it says about the US, because mentoring is alive and well in places like China and Japan. A lack of mentoring, if true, probably puts Russia at a disadvantage. Disadvantaged and, frankly, clueless.
I have had mentors and I have been a mentor in turn and however squishy there usually is a quid pro quo for the mentor. Maybe he or she is flattered by the attention. Maybe they are just paying it forward. But I’m pretty sure the Russian MBA student was wrong and this mentoring business has a reward structure, just not a standardized or very rich one.
I asked a couple of friends about their mentoring experiences. Avram Miller, who co-founded Intel Capital and is arguably the father of home broadband, had this to say:
"I would not have been successful in my career had not a number of people (they were all men) helped me develop my skills and opened doors for me. They were all older than I was. Maybe it was some extension of parenting for them. I in return have done the same for many younger people. I also try to help my friends but in that case, we call it friendship. The most important thing about being a mentor is the sense of impact and accomplishment. I have and do mentor women as well. I don’t think there is anything special about high tech and mentoring. Also some great managers were not great mentors".
Read Avram’s excellent blog at www.twothirdsdone.com.
Richard Miller (no relation to Avram) took a different view. Richard is English and came to the USA as VP of R&D at Atari, reporting to Jack Tramiel, whom I have a hard time thinking of as a mentor, but what do I know? Richard designed the Atari Jaguar game console and many other systems since including leading the team at PortalPlayer (now nVIDIA) that designed the Tegra mobile processors:
"I doubt many business professionals think of themselves as mentors except in some kind of ego-bloated hindsight way," said Richard, I guess handling that inevitable Jack Tramiel question. "They may be good mentors since the methods of mentoring are effective in achieving their goals, but it is rare for a business leader to devote any thought or time on the subject in my experience. In the business space, the act of mentoring is in reality something that the mentee (is that a word?) creates through his or her own study. The most effective learning tool in that environment is access. In the technical fields mentoring is very important and in good companies it is structured, in more of a professor-postgrad kind of setup. I’m sure you’ll find this happening at Intel, Google etc. I see it a lot both here and in Asia (China, Japan, India)".
Though I hadn’t thought of this before, Richard is probably correct that the effects of mentoring are largely guided by the questions of the student and that person’s willingness to learn from the answers. And, thinking further, I can see that in my own career there are people with whom I’ve spent only a few hours who have, through their insights, changed my life. Or maybe, in Richard’s view, it was through my willingness to ask the right questions and absorb the insights that this happened.
What do you think? Has mentoring had a positive role in your life and career?
Reprinted with permission
Image Credit: Filipe Matos Frazao/Shutterstock
So Steve Ballmer is leaving Microsoft a year from now: what kind of schedule is that? It’s one thing, I suppose, for a company to point out that it has a retirement policy or a succession plan, or even to just give the universe of potential Microsoft CEOs a heads-up that the job is coming open, but I don’t think that’s what this is about at all. It’s about the stock.
Like in baseball, when all else fails to get the team out of a slump, fire the manager. And sure enough, Microsoft shares are up eight percent as I write. Ballmer himself is $1 billion richer than he was yesterday. I wonder if he had cleaned out his desk this afternoon whether it would have been $2 billion?
You’ll read a lot of stories today and tomorrow about how Ballmer as CEO missed big product trends like smart phones and tablets -- the very trends that Steve Jobs and Apple did so well. But that’s not so. Windows CE phones existed long before the iPhone. Windows tablets predated the iPad by more than a decade and date from the pen-based computing fiasco of the early 1990s. So it’s not that Microsoft missed these opportunities -- it just blew them. Windows CE sucked and Windows for Pen Computing was close to useless.
Apple was successful in these niches mainly because it did a more thoughtful job of them at a time when hardware was finally coming available with enough power at the right price to get the job done. Earlier simply wasn’t an option.
Microsoft has always been good at embracing enormous opportunities -- opportunities big enough to drive a truckload of money through -- but hasn’t been very good at the small stuff. Microsoft saw that IBM-compatible PCs were going to be a huge business and so it bought an operating system to be a part of that. Microsoft saw that network computing was going to be big so it "borrowed" some tech from 3Com. Microsoft saw that graphical computing would be the next trend so it licensed Windows 1 from Apple. Microsoft recognized almost too late that the Internet was going to be huge so it started giving away Internet Explorer. All of these were simply doors that needed to be walked through into new rooms filled with money. The most truly innovative move that Microsoft may have made as a business was simply bundling most of its apps together into Office and using that to destroy the rest of the PC software industry -- now that was smart.
But smart phones and tablets, those were tactical moves, not strategies, and the revenue potential was never there to get those efforts the top talent they would have required to succeed if the hardware had been ready to support them.
It’s good that Ballmer is moving on and I’ll be intrigued to see what he does with his money. As for Microsoft, the future there is even more uncertain. There’s lots of money still to be made, of course, but the PC era is coming to a close and Redmond appears not to even be a player in whatever this new era comes to be called. That has to be tough for a fighter like Ballmer. I’d be eager to move on, too, if I were him.
Reprinted with permission
Fifty-two years ago, three days before he left office and retired from Washington, U.S. President Dwight D. Eisenhower addressed the nation on television with what he called "a message of leave-taking and farewell, and to share a few final thoughts…"
This came to be called Eisenhower’s military-industrial complex speech and was unlike any other address by Eisenhower or, indeed, by any of his predecessors. You can read the entire speech (it isn’t very long) here, or even watch it here, but I’ve also included below what I believe to be the most important passage:
Until the latest of our world conflicts, the United States had no armaments industry. American makers of plowshares could, with time and as required, make swords as well. But now we can no longer risk emergency improvisation of national defense; we have been compelled to create a permanent armaments industry of vast proportions. Added to this, three and a half million men and women are directly engaged in the defense establishment. We annually spend on military security more than the net income of all United States corporations.
This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence -- economic, political, even spiritual -- is felt in every city, every State house, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.
In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.
We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.
The speech was an extraordinary warning from an old general about the very dangers presented by an overzealous military machine -- a warning as valid today as it was back then. Maybe more so.
Absolutely more so.
This is the third and last in my series of columns on data security, this time looking forward and suggesting a note of caution. Just as Eisenhower predicted back in 1961, we stand today at the beginning of a whole new military industrial complex, one that promises to be bigger and more pervasive than any that came before. I call it the cyber trough because it’s where so many pigs will soon be feasting.
Wars are powerful tools for economic development. America emerged from World War One a superpower, having not been one before. World War Two ended the Great Depression and solidified America’s role as a superpower for another half century. When Eisenhower spoke, the bogeyman was Soviet Russia compelling us to spend $1 trillion on nuclear arms and their delivery at a time when we didn’t even know as a nation what $1 trillion was called, much less how we’d raise it.
Just as Eisenhower predicted, there were a series of events, each engendering a new kind of fear that could only be salved by more arms spending. I’m not saying all this spending was wrong but I am saying it was all driven by fear -- fear of mutually assured destruction, of Chinese domination, of Vietnam’s fall, of the USSR’s greater determination, of the very fragility of our energy supplies, etc.
Each time there was a new fear and a new reason to spend money to defeat some opponent, real or imagined.
Then came 9-11, al-Qaeda, and its suicide bombers. I had met Carlos the Jackal -- the 1970s poster boy for terrorism -- and he wouldn’t have killed himself for any cause. This suicide stuff was new and different and so we took it very seriously, spending another $1 trillion (or was it $2 trillion?) to defeat shoe bombers and underwear burners and any number of other kooks.
Al-Qaeda changed the game, bringing the action home to America for the first time since Lincoln.
But now the trends that created the Department of Homeland Security and the Patriot Act have somewhat run their courses. Bin Laden is dead and if the military industrial complex is to endure and even thrive in an era of sequestration, we’ll need a whole new class of threats against which to respond and throw money.
Cyber Threats
Washington is gearing-up for cyber warfare. We will conduct it against other nations and they will conduct it against us. As always nations will be served by corporate surrogates and vice versa. But where in the past there was always an identified enemy and a threat that could be clearly targeted, the nature of cyber warfare is that small nations can be as dangerous as large ones and inspired freelancers can be the most dangerous of all.
And to the delight of the military industrial complex, this is one threat -- one source of national fear -- that is unlikely to go away… ever.
Here’s how a friend of mine who operates inside the Washington Beltway sees it: "The big move here in DC -- is to standardize cyber threats as a fact of life. Total integration of cyber fear into the fabric of the economy -- cyber-insurance, etc. The building of the next cyber bubble is well under way. All of the beltway companies are embracing the opportunity with amazing haste. It is the ultimate business case for profitability and the profits will be astronomical."
Here’s the genius in this new threat: every country, every company, every technically smart individual can be seen as presenting a cyber threat. They’ll do it for power, money, patriotism, religion -- the reasons are as varied as the ethnicities of the practitioners.
But this time you see we can never win, nor can we even intend to. Cyber war will last forever. The threats will evolve, the enemies will be too numerous, and because we’ll be doing it too as a nation, there’s also the prospect of simple revenge and retribution.
The threat of cyber warfare will drive defense and intelligence spending for the next half-century. It will never be conquered, nor do the warriors really even want that to happen since their livelihoods would go away.
In a paralyzed Congress, cyber warfare will soon be the only true bipartisan cause even though it shouldn’t be a cause at all.
And this is what makes Edward Snowden so important. He’s at best a minor player revealing nothing so far that wasn’t already common knowledge, yet he’s being treated like a threat to global security.
If the intelligence community didn’t have Edward Snowden they would have had to invent him, his value as a catalyst for future government spending is so great.
If that crack about Snowden sounds familiar, remember it’s what Voltaire said of God.
Voltaire, God, and now Edward Snowden: Eisenhower was right! Prepare to be taken advantage of… again.
Reprinted with permission
This week we have the DefCon 20 and Black Hat computer security conferences in Las Vegas -- reasons enough for me to do 2-3 columns about computer security. These columns will be heading in a direction I don’t think you expect, but first please indulge my look back at the origin of these two conferences, which were started by the same guy, Jeff Moss, known 20 years ago as The Dark Tangent. Computer criminals and vigilantes today topple companies and governments, but 20 years ago it was just kids, or seemed to be. I should know, because I was there -- the only reporter to attend Def Con 1.
In those days there were no independent computer security research organizations. There were hackers, or more appropriately crackers, as they were known.
Def Con (notice the different spelling) was a computer criminal’s rave where -- for reasons I could never quite understand -- the cops were invited to attend. The Dark Tangent can now legally drink at his own show (he couldn’t 20 years ago), he picked up a real name along the way and even an MBA, so of course the show is now supposed to make money. They still play Spot the Fed, with the person who spots the Fed getting a t-shirt that says, "I spotted the Fed", and the Fed who has been outed receiving a shirt that says, "I am a Fed". It’s cute, but no longer clever.
Def Con 1 attracted around 150 hackers and crackers to the old Sands Hotel back before ConAir Flight 1 smashed the hotel to bits for a movie. The year was 1993 and InfoWorld, where I worked in those days, wouldn’t pay my way, so I went on my own.
It was surreal. I knew I wasn’t in Kansas anymore when my cellphone rang in a session, setting off four illegal scanners in the same room. As I left to take my call in the hallway I wondered why I bothered.
There were two high points for me at Def Con 1. First was the appearance of Dan Farmer, then head of data security for Sun Microsystems. Dressed all in black leather with flaming shoulder-length red hair and a groupie on each arm, Dan sat literally making-out in the back row until it was time for his presentation. But that presentation was far more entertaining than the smooching. In a series of rapid-fire slides Farmer showed dozens of ways in which crackers had attacked Sun’s network. He explained techniques that had failed at Sun but would probably have succeeded at most other companies. It was a master class in computer crime and his point, other than to prove that Dan was the smartest guy in the room, was to urge the crackers to at least be more original in their attacks!
But the best part of Def Con 1 was the battle between the kids and hotel security. Contrary to popular belief, breaking into Pentagon computer systems was not very lucrative back then, so many of the participants in that early Def Con did not have money for hotel rooms. The Dark Tangent handled this by renting the single large meeting room 24 hours per day so it could be used after hours for sleeping. Alas, someone forgot to explain this to the 6AM security shift at the Sands. Just as the hardy group of adventurers returned from a late-night break-in at the local telephone company substation, fresh security goons closed the meeting room and threw the kids out.
It is not a good idea to annoy a computer cracker, but it is a very bad idea to annoy a group of computer crackers bent on impressing each other.
The meeting reconvened at 9 or 10 with the topic suddenly changed to Revenge on the Sands. Gail Thackeray, then a US Attorney from Arizona who at that moment had approximately half the room under indictment, rose to offer her services representing the kids against the hotel management.
Thackeray had been invited to speak by the very people she wanted to put in jail. I told you this was surreal.
Adult assistance might be nice, but a potentially more satisfying alternative was offered by a group that had breached the hotel phone system, gained access to the computer network, obtained root level access to the VAX minicomputer that ran the Sands casino, and were ready at any moment to shut the sucker down. It came to a vote: accept Thackeray’s offer of assistance or shut down the casino.
There was no real contest: they voted to nuke the casino. Not one to be a party pooper, I voted with the majority.
Gail Thackeray, feeling her lawyer’s oats, was perfectly willing to be a party pooper, though. She explained with remarkable patience that opting en masse to commit a felony was a move that we might just want to reconsider, especially given the three strikes implications for some of the older participants.
We could accept her help or accept a date with the FBI that afternoon. The Sands (now the Venetian), which was ironically owned by the same folks who used to run Comdex, never knew how close it came to being dark.
It was a thrilling moment like you’d never see today. Everyone who was in that room shares a pirates’ bond. And though I can’t defend what we almost did, I don’t regret it.
And like the others, I wish Gail Thackeray had stayed in Arizona and we’d shut the sucker down.
Reprinted with permission
Photo credit: Adchariyaphoto/Shutterstock
Yesterday Google announced a product called Chromecast -- a $35 HDMI dongle that’s essentially YouTube’s answer to Apple TV. While the event was more Googlish than Applesque (the venue was too small, the screens were too small, the presenters weren’t polished, and as a result the laughs and applause didn’t come) the product itself was astonishing -- or appeared to be.
The press picked up on the most obvious headline item in the announcement -- the $35 selling price, which drops to an effective $11 if you factor in the three months of free Netflix per dongle, even for existing Netflix customers (a promotion that has since sadly been dropped). That’s like Google attaching a $24 bill to every Chromecast -- something Apple would never do.
But the press -- even the so-called technical press -- seemed to miss some attributes of the product that were right up there in the "then a miracle happens" category. Let me list what stood out for me:
1. Chromecast works with any HDMI-equipped TV;
2. Chromecast can turn on the TV all by itself, and;
3. Chromecast can switch the TV to its own HDMI input.
I don’t doubt that Chromecast can do number one. Though HDMI varies from TV to TV, there is a base feature set required to even put the HDMI sticker on the set, so this one makes sense to me. It’s the combination of 1 with 2 and/or 3 that had me scratching my head.
Very few TVs accept remote control commands through HDMI and even those that do aren’t compatible with each other, so turning on the TV or changing the HDMI input were unlikely to work exactly as portrayed across a broad sample of TVs.
So what can this thing actually do?
This morning I asked around and found a couple folks in the consumer electronics space who either knew about the product directly (one guy) or knew the product space even more intimately (another guy). Here’s what they agreed on: in order for Chromecast to work with all HDMI-equipped TVs and do what Google demonstrated, the TV had to be already turned on with the display asleep and the Chromecast had to be the only HDMI device powered-up or (more likely) the only HDMI device attached to the TV.
In other words, while the product is still important and ground-breaking in many ways, at least some of the more exciting features appear to have been… demoware.
Most HDMI-equipped TVs that have a sleep mode will come to life if they detect a signal on any HDMI port. So the Chromecast isn’t likely to have actually turned on the TV as much as turned on the display.
Some (notice not most) HDMI-equipped TVs will switch the active input to, well, the input that is active. But this depends to some extent on the behavior of the other HDMI inputs and devices. The Cringely Boys have a newish 60-inch LED TV from Sharp with four HDMI inputs assigned variously to a satellite receiver (we live on a mountain remember), an Xbox, a Roku and an Apple TV (sharing a smart HDMI splitter), and a PC. On this TV at least, with all of those devices attached and in their various sleep modes, I don’t think just using the remote to bring a single device to life (hitting the Apple TV menu button, for example) would actually switch the active HDMI input. At least it doesn’t for me, and this Sharp is a pretty representative higher-end TV.
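To make that concrete, here is a minimal sketch in Python of the simplified TV model my two sources described: a sleeping panel wakes on any HDMI signal, but the active input only changes if the set supports CEC-style switching or the new device happens to be the only one powered up. Everything here -- the class, the port rules, the device names -- is my own illustrative assumption, not a real TV API or anything Google has documented.

# Toy model of the TV behavior described above: a sleeping set wakes on any
# active HDMI signal, but only switches inputs when nothing else is powered up
# (or when the set happens to support CEC-style input switching).
# All names and rules here are illustrative assumptions, not a real TV API.

class Television:
    def __init__(self, supports_cec_switching=False):
        self.display_on = False
        self.active_input = None
        self.attached = {}                 # port number -> {"device": ..., "powered": ...}
        self.supports_cec_switching = supports_cec_switching

    def attach(self, port, device, powered=False):
        self.attached[port] = {"device": device, "powered": powered}

    def signal(self, port):
        """A device on `port` starts sending video."""
        self.attached[port]["powered"] = True
        self.display_on = True             # most sets wake the panel on any signal

        powered_ports = [p for p, d in self.attached.items() if d["powered"]]
        if self.supports_cec_switching or powered_ports == [port]:
            self.active_input = port       # only then does the input actually change

tv = Television(supports_cec_switching=False)
tv.attach(1, "satellite receiver", powered=True)
tv.attach(2, "Chromecast")
tv.signal(2)
print(tv.display_on)    # True  -- the panel wakes up
print(tv.active_input)  # None  -- but the input never switches to the Chromecast

Which, with everything attached, is exactly the behavior I see on the Sharp.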
So Chromecast is amazing, especially for its price point. It’s the functional equivalent of an Apple TV for a lot less money. If Google can come up with an equally radical content strategy this could be a game-changer for television (more on that in a future column), but looking closer the product simply isn’t as cool as Google presented it to be.
That makes no sense to me. Google could have been more honest about the features while improving the venue and the crowd to build buzz and get people watching the event later online. We could all be excited about the actual product -- thanks to showmanship -- not the idea of the product.
Or maybe I’m the one who’s full of shit and Google has created a miracle.
Reprinted with permission
Mark Surich was looking for a lawyer with Croatian connections to help with a family matter back in the old country. He Googled some candidate lawyers and in one search came up with this federal indictment. It makes very interesting reading and shows one way H-1B visa fraud can be conducted.
The lawyer under indictment is Marijan Cvjeticanin. Please understand that this is just an indictment, not a conviction. I’m not saying this guy is guilty of anything. My point here is to describe the crime of which he is accused, which I find very interesting. He could be innocent for all I know, but the crime, itself, is I think fairly common and worth understanding.
The gist of the crime has two parts. First, Mr. Cvjeticanin’s law firm reportedly represented technology companies seeking IT job candidates, and he is accused of running, on the side, an advertising agency that placed employment ads for those companies. That could appear to be a conflict of interest, or at least it did to the DoJ.
But then there’s the other part, in which most of the ads -- mainly in Computerworld -- seem never to have been placed at all!
Client companies paid hundreds of thousands of dollars for employment ads in Computerworld that never even ran!
The contention of the DoJ in this indictment appears to be that Mr. Cvjeticanin was defrauding companies seeking to hire IT personnel, yet for all those hundreds of ads -- ads that for the most part never ran and therefore could never yield job applications -- nobody complained!
The deeper question here is whether they paid for the ads or just for documentation that they had paid for the ads.
This is alleged H-1B visa fraud, remember. In order to hire an H-1B worker in place of a U.S. citizen or green card holder, the hiring company must show that there is no "minimally qualified" citizen or green card holder to take the job. Recruiting such minimally qualified candidates is generally done through advertising: if nobody responds to the ad then there must not be any minimally qualified candidates.
It helps, of course, if nobody actually sees the ads -- in this case reportedly hundreds of them.
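To see why unseen ads matter so much, here is a small, purely hypothetical Python sketch of the recruiting logic described above. The function names and the decision to model it this way are mine; the point is just that an ad nobody ever sees produces zero applicants, which is exactly the "evidence" the certification process asks for.

# Hypothetical sketch of the recruiting logic described above.
# If an ad never actually runs, it can never draw applicants, so the
# "no minimally qualified candidates" conclusion is guaranteed in advance.

def minimally_qualified(applicants):
    # Stand-in for whatever screening the employer claims to perform.
    return [a for a in applicants if a.get("qualified")]

def can_certify_labor_shortage(ad_was_published, applicants):
    responses = applicants if ad_was_published else []   # unseen ads draw nobody
    return len(minimally_qualified(responses)) == 0

# Ad that really ran and drew one qualified U.S. applicant: no certification.
print(can_certify_labor_shortage(True, [{"name": "A. Coder", "qualified": True}]))   # False

# Ad that was only invoiced, never printed: certification follows automatically.
print(can_certify_labor_shortage(False, [{"name": "A. Coder", "qualified": True}]))  # True

Nothing in, certification out.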
When Mr. Cvjeticanin was confronted with his alleged fraudulent behavior, his defense (according to the indictment) was, "So let them litigate, I’ll show everyone how bogus their immigration applications really are". Nice.
If we follow the logic here it suggests that his belief is that the client companies’ probable H-1B fraud is so much worse than the shenanigans Mr. Cvjeticanin is accused of that those companies won’t dare assist Homeland Security or the DoJ in this case. Who am I to say he’s wrong in that?
Employers are posting jobs that don’t really exist, seeking candidates they don’t want, and paying for bogus non-ads to show there’s an IT labor shortage in America. Except of course there isn’t an IT labor shortage.
My old boss Pat McGovern, who owns Computerworld, should be really pissed. Pat hates to lose money.
Reprinted with permission
Photo Credit: Ahturner/Shutterstock
In case you missed it, the Rambling Wrecks of Georgia Tech will next year begin offering an online master’s degree in computer science for a total price of just under $7,000 -- about 80 percent less than the current in-state tuition for an equivalent campus-based program. The degree program, offered in cooperation with AT&T and courseware company Udacity, will cost the same no matter where the students live, though two-thirds are expected to live and work outside the USA. Time to complete the degree will vary but Georgia Tech thinks most students will require about three years to finish. The program is inspired, we’re told, by the current hiring crisis for computer science grads -- a crisis that anyone who reads this column knows does not exist.
Programmers in Bangalore will soon boast Georgia Tech degrees without even having a passport.
There are plenty of online courses available from prestigious universities like MIT and Harvard -- most of them free. There are plenty of online degree programs, too -- most of them not free and in fact not even discounted. So this Georgia Tech program, made possible by a $2 million grant from AT&T, is something else. It could be the future of technical education. It could be the beginning of the end for elite U.S. university programs. Or it might well be both.
The online classes are all free, by the way, it’s just the degree that costs money.
This technical capability has been around for several years but no prestigious U.S. university has made the jump before now because it’s too scary. Georgia Tech is launching its program, I believe, to gain first-mover advantage in this new industry, which I suppose is education, maybe training, but more properly something more like brand sharing or status conferral.
I predict the program will be good for Georgia Tech but not especially good for the people of Georgia. I further predict it will be not very good for U.S. higher education and will hurt the U.S. technical job market. Still, I don’t blame Georgia Tech for this audacious move. Somebody was going to do it, why not them?
Here are some problems with the plan as I understand it. Georgia Tech will start with 300 students, many of them AT&T employees, but hopes to expand the program to as many as 10,000 students -- about 40 times larger than the University’s current CS student population. This will require, according to their plan, eight additional university instructors to serve those 10,000 bodies.
Huh?
I’m all for efficiency and most universities are anything but; however, these numbers boggle my old mind. To make them work, in fact, requires a complete rethinking of the graduate education experience.
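Here is the back-of-the-envelope arithmetic behind my boggling, using only the figures above (10,000 students, eight additional instructors, a $7,000 degree). Treat it as a rough sketch, not Georgia Tech’s actual budget.

# Back-of-the-envelope numbers from the plan as described above.
students          = 10_000
added_instructors = 8
degree_price      = 7_000        # dollars, total per student

students_per_added_instructor = students / added_instructors
tuition_per_added_instructor  = students_per_added_instructor * degree_price

print(students_per_added_instructor)   # 1250.0 students per new hire
print(tuition_per_added_instructor)    # 8,750,000 dollars of degree revenue per new hire

However you slice it, each new hire is supposed to carry well over a thousand students.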
Ironically, looking at some of the communication that went into making this program, I’d say Georgia Tech sees it about the same way I do: it’s all about the money and not much about the education.
Let me explain. Georgia Tech is a major research university. In big research universities research and publishing count for everything and teaching counts for close to nothing, which is why there are so many bad teachers with endowed professorships. But research universities also often have professional schools like those that teach law, medicine, or business. A key distinction between research programs and professional programs is that most graduate research students have fellowships or assistantships of some sort. In other words they go to school for free in exchange for their labor teaching or researching. Professional schools, on the other hand, expect their students to pay their own way. There may be a few scholarships for MBA students, for example, but most get by on loans they’ll be paying back for decades to come.
Research grad students are slave labor while professional grad students are cash cows for their institutions and matter mostly for the money they can pay.
Computer science is a research field but this new degree at Georgia Tech is specifically branded as being a professional degree. While that sounds extra-important what it really means is the students won’t matter at all to the University, which sees them strictly as cash flow -- up to $18 million per year according to the business plan.
So the logical questions one might ask about such a program (How do you maintain quality? What’s the impact on research? How can you serve 40 times as many students with only eight extra teachers?) might produce surprising answers.
You don’t maintain quality, nor do you intend to because this is, after all, a program that requires no standardized entrance testing of its applicants. Garbage-in, garbage-out.
The impact on research will be nada, zilch, zero, because the new online students won’t ever directly interact with research teachers or even research graduate students, nor will they ever get a chance to benefit from such interaction. Out of sight, out of mind.
You can only serve 40 times as many students with eight extra heads by not serving them. Any hand-holding will be pushed off on Udacity or -- to a very limited extent -- to the 4000 Pearson locations where proctored tests will be given for the degree.
So what kind of Georgia Tech degree is this, anyway?
A very crappy one.
Striving for first-mover advantage and $18 million, Georgia Tech will ultimately sacrifice its brand reputation.
Good luck with that.
This is not to say that there aren’t plenty of ways to get costs out of computer science education. I was in Paris two weeks ago, for example, and had lunch with Xavier Niel, whose company Free is France’s second-largest ISP and fourth-largest mobile carrier. Xavier, a self-taught programmer, is starting a programming school in France this fall that costs nothing at all to attend. He says the school is his effort to save France’s technical industries and I believe him.
While $7,000 is cheap for a CS degree, free is even better.
Who will step up to start a similar school in America?
Reprinted with permission
Photo Credit: Andresr/Shutterstock
What are the differences between Edward Snowden, the NSA whistleblower, and Daniel Ellsberg, who released the Pentagon Papers back in 1971?
Not much, really, but the distinctions that do exist are key.
Clearly Daniel Ellsberg was a lot classier in his day than Edward Snowden is today. And that’s a big part of the problem, both with this story and the current intelligence fiasco that Snowden describes: too many cowboys.
9/11, as I wrote at the time, opened the national checkbook to over-react and over-spend on intelligence. As a result what we as a nation are doing is recording every piece of data we can get. We say we are doing so to detect and prevent terrorist acts but more properly our agencies expect to use the data for post-event analysis -- going back and figuring out what happened just as law enforcement did after the Boston Marathon blasts.
The biggest problem with these programs (there are many) is that we inevitably play fast-and-loose with the data, which is exactly one of the tidbits dropped by Edward Snowden. Feds and fed contractors are every day looking at things like their own lovers and celebrities they know they aren’t supposed to check on, but what the heck? And the FISA Court? It can’t take action against something it knows nothing about.
When I started working on this column my idea was to look at Snowden from a Human Resources perspective. If government and contractor HR were better, for example, Snowden would never have been hired or he would have been better indoctrinated and never squealed. Snowden is an HR nightmare.
But having talked to a couple really good HR people, I think the Snowden problem goes far beyond better filtering and training to an underlying paradox that I’m sure bedevils every administration, each one suffering more than the one before it as technology further infiltrates our lives.
Nothing is as it seems, you see, so every innocent (and that’s where we all begin) is inevitably disappointed and then corrupted by the realities of public service.
President Obama campaigned in 2008 as an outsider who was going to change things but quickly became an insider who didn’t change all that much, presumably because he came to see the nuances and shades of gray where on the campaign trail things had seemed so black and white. But when that shift happened from black-and-white to gray, someone forgot to send a memo to the Edward Snowdens, who were expected to just follow orders and comply. But this is a generation that doesn’t like to follow orders and comply.
Sitting as he did on the periphery of empire, Snowden and his concerns were not only ignored, they were unknown, and for an intelligence agency to not even know it had an employee ready to blow is especially damning.
So what happens now? The story devolves into soap opera as Snowden seeks refuge like a character in a Faulkner novel. Maybe he releases a further bombshell or two, maybe he doesn’t. But the circumstances strongly suggest that we’ll see more Edward Snowdens in the future, because little or nothing seems to be happening to fix the underlying problems.
Those problems, in addition to there being too many cowboys, are that all the incentives in place only make things worse, not better. Snowden was a contractor, for example. Why not a government employee? Because government salary limits didn’t allow hiring six-figure GEDs like Snowden. So do we bring it all in-house? Impossible for this same reason unless we redefine being a public servant to something more like the Greek model where government employees made significantly more money than their private sector equivalents.
How well would that go over right now with Congress?
We could give all the work to the military. The NSA, after all, is a military organization. But that would add even more cowboys and would be dubious on Constitutional grounds, too.
The most likely answer here is that nothing will really happen, nothing will change, which means we’ll have more Edward Snowdens down the road and more nasty revelations about our government. And I take some solace in this, because such dysfunctional behavior acts as a check and balance on our government’s paranoia and over-ambition until that pendulum begins to shift in the other direction.
No matter what happens with Edward Snowden or what further information he reveals, there is far more yet to come.
Reprinted with permission
To most people who recognize his name Doug Engelbart was the inventor of the computer mouse but he was much, much more than that. In addition to the mouse and the accompanying chord keyboard, Doug invented computer time sharing, network computing, graphical computing, the graphical user interface and (with apologies to Ted Nelson) hypertext links. And he invented all these things -- if by inventing we mean envisioning how they would work and work together to create the computing environments we know today -- while driving to work one day in 1950.
Doug had a vision of modern computing back in the day when many computers were still mechanical and user interfaces did not even exist. He saw in a flash not only the way we do things today but also the long list of tasks that had to be completed to get from there to here. Now that's vision.
Doug recognized immediately that to even describe his vision to computer scientists of the time would be to invite ridicule. He laid it all out for a colleague once and was advised to keep the whole idea under his hat, it was so crazy.
So Doug spent decades in the trenches building devices and writing software he knew to be already obsolete. But that's what you do when you have a wife and young daughters and want to get to that point in life where you can actually realize the dream.
When that finally happened in the late 1960s at the Stanford Research Institute, Doug had both his most sublime moment of public recognition and his greatest disappointment. The recognition came in December, 1968 in a live 90-minute demonstration at the Joint Computer Conference held that year in San Francisco. Here is a link to that demo.
The impact of this mother of all demos can't be over-estimated. Engelbart and his team showed graphical and networked computing, the mouse, screen icons, dynamically linked files and hypertext to an audience of 1000 computer scientists who until that morning thought computing meant punched cards. The result of six years of work, the demo was equivalent to dropping-in on a model rocketry meeting and bringing with you a prototype warp drive. The world of computing was stunned.
But then nothing happened.
Doug got accolades, but he expected research contracts and those didn't immediately appear. He was not only still too far ahead of his time, he naively failed to realize that vested powers in the research community would keep money flowing toward now obsolete research. When the research really got going again much of it was not at SRI but at XEROX PARC, where many of Doug's team members landed in the early 1970s.
If Apple stole technology from XEROX, then XEROX in turn stole from Doug and SRI.
Being a visionary takes both patience and determination and Doug made it clear he thought the world needed much more from him than just the mouse. He was a huge proponent of the chord keyboard, for example. And when the personal computer took off Doug didn't cheer because he thought the power of a timesharing minicomputer was required. And he was right, because today's PC has the power of a timesharing minicomputer. Doug was delighted when he realized that his Augment system could run faster on a local PC emulator than on the Big Iron at SRI.
I met Doug Engelbart in the late 1970s, introduced by my friend Kirk, who worked in Doug’s lab. We spent a day with Doug for Triumph of the Nerds in 1995. I did a noteworthy NerdTV interview with Doug back in 2005. And just last year for Computer History Day Mary Alyce and I took our boys to Doug's house for a very pleasant afternoon. There has never been a nicer man, at least to me.
I once asked Doug what he'd want if he could have anything. "I'd like to be younger", he said. "When I was younger I could get so much more done. But I wouldn't want to be any less than 50. That would be ideal".
Reprinted with permission
Photo Credit: alphaspirit/Shutterstock
A month ago I began hearing about impending layoffs at IBM, but what could I say beyond "layoffs are coming?" This time my first clues came not from American IBMers but from those working for Big Blue abroad. Big layoffs were coming, they feared, following an earnings shortfall that caused panic in Armonk with the prospect that IBM might after all miss its long-stated earnings target for 2015. Well the layoffs began hitting a couple weeks ago just before I went into an involuntary technical shutdown trying to move this rag from one host to another. So I, who like to be the first to break these stories, have to in this case write the second day lede: what does it all mean?
It means the IBM that many of us knew in the past is gone and the IBM of today has management that is, frankly, insane.
Tough talk, I know, but I’ll offer up right here and now a very public experiment to prove what I mean.
I call them like I see them and always have. That’s my reputation. Ask Steve Ballmer at Microsoft if he likes my work and he may very well say "no." Ask Larry Ellison. Ask Larry Page. You can’t ask Steve Jobs but you can ask Tim Cook. Do they like my work? No, no, and no. Now ask if they respect my work and every one of those men will probably say "yes." Because I call them like I see them and always have.
So you are about to read here a very negative column about IBM saying things you may or may not already know, but it generally comes down to the idea that those folks in Armonk are frigging crazy and are doing their company, the nation -- maybe the world -- an incredible disservice.
Here comes the experiment. Look at the comments at the bottom of this column. There will shortly be dozens, possibly even hundreds, of them. See how many of those comments are from happy IBM customers. If IBM is doing a good job then IBM customers will speak up to support their vendor and tell me I am full of shit. Customer satisfaction is all that matters here, because at the end of the day companies live and die not by their quarterly earnings-per-share but by their ability to please customers. And I’m quite willing to predict that the number of comments from happy IBM customers below will be close to zero, because IBM is a mess, customers are pissed, and management doesn’t seem to care.
Feel free to go check the comments right now and I’ll wait for you to come back…
See?
Let’s look at IBM’s recent financial numbers, which you can find in several places (I prefer Yahoo Finance, myself). IBM’s Global Technology Services brings in the most revenue -- 38.5 percent of the total -- and 29.4 percent of the profit. Only the software group is bringing in more actual profit. IBM’s big money maker is struggling. Revenue is slipping and profits are being maintained by cost cutting. Cost cutting is hurting the quality of service and that is contributing to the decline in overall business. There is no rocket science to this.
Internally IBM is amazingly secretive. Employees are rarely told anything of substance. This includes business plans. For the most part the rank and file of IBM do not know anything about the company’s business plans. What is Ginni Rometty (she’s the chairman, president and CEO of IBM) doing? What is her plan? Most of IBM simply does not know. Workers are given the company line, but none of the company substance.
Take, for example, these current layoffs: how many people have recently lost their jobs at IBM? Nobody outside the company actually knows. The even more surprising truth is that almost nobody inside the company knows, either. I’ve heard numbers from 3,000 all the way to 8,000 current layoffs and you know I believe all of them because there are so many different ways to carve up this elephant.
Global IBM employment is clearly dropping but employment in India, for example, is rising, so is this a net global number or gross layoffs? Nobody knows. What we do know is that layoffs are happening in nations where IBM salaries are higher than average -- Australia, Canada and the USA -- yet where regulations more easily allow such cuts. No jobs are being lost at IBM France, as far as I’ve heard, because there would be no associated financial savings in that socialist system.
Why won’t IBM reveal these numbers? They are numbers, after all, that might actually cheer Wall Street, where cost-cutting is nearly always good news. I could only speculate why and this is one time where maybe I’ll leave that to you in your comments, below. Why do you think IBM no longer reveals its employment numbers?
What has become evident to me about these particular layoffs is that they are extreme. Good, hard-working, useful employees are being let go and their work transferred either to other local team members who are already overworked or to teams in India. Customers are rightly growing wary of IBM India.
I’ve heard from some IBM customers who say they have been bending Ginni’s ear about IBM screw-ups. So she started an initiative to improve the "customer experience". Alas, from those who have been touched by it so far this initiative appears to be all marketing and fluff with no substance behind it. The things that are upsetting IBM customers are not really getting fixed, the company is just telling customers that things are being fixed. The truth is that in the foreground IBM has more people telling the customer what they want to hear and in the background IBM has more people yelling at the support teams to do a better job.
But telling and yelling are not the best path -- or even a path at all -- to a better customer experience.
Internally the story is that Ginni expects each division to make its numbers. No excuses. Failure is not an option. That explains these new layoffs. Each division is looking at its budget and is in the process of cutting itself back to prosperity. We can probably expect IBM to do something like this every quarter from now on.
IBM executives are fixated on the 2015 plan. That plan is only about increasing shareholder value, AKA the stock price. Ask Warren Buffett, if you have him on speed dial, if this kind of thinking even qualifies as a business plan. He’ll tell you it doesn’t.
Here’s what’s most likely coming for IBM. As each quarter rolls by it will become more obvious to Wall Street that IBM’s business is flat and/or declining. IBM may make its income and profit goals each quarter, but revenue will continue to go down. The only thing that will change this is if the dollar drops dramatically -- an effect that has helped Big Blue before. But if the dollar stays about where it is, perception will take over, and perception is an important part of any stock price: when a business is flat or declining, Wall Street does not like that. Regardless of how many jobs IBM cuts, then, the stock price will eventually go down. IBM can make all its income and profit goals yet the falling stock price will still drag down shareholder value.
What happens then? More share buy-backs, the sale of complete business units, and then, well then I don’t know what, because I see no end to this trend with current management. Maybe that’s it: IBM management will change, new management will blame everything on old management, and they’ll try to reset the clock. But it probably still won’t work because by then both worker and customer loyalty will be gone completely.
Making its numbers is IBM’s only priority right now. IBM will push its customers to the breaking point and will abuse its employees to achieve this goal. IBM does not care who it hurts. The IBM that used to be the leader in social reform and good corporate citizenship no longer exists.
Where are the customers in this? In IBM’s big plans its customers are a necessary evil. When you look at the poor quality of service IBM is providing it is very clear IBM does not value its customers. Making the 2015 plan is the only priority and IBM is willing to compromise its service to customers and abuse its workforce to get there.
No IBM customer is asking the company to put fewer workers on their account.
Sometime, perhaps soon, CEOs and CIOs will begin to openly discuss their IBM experiences in Wall Street circles. Perception is a powerful force and it will eventually take a big bite out of IBM. Only then, when it’s already close to over, will the general press, the public, and our government even notice what’s happened.
At some point IBM will realize its 2015 plan has already failed (remember you read it here first). IBM’s stock price will drop… a lot. When the price is low enough it will force the company to change how it runs the business. At that point IBM may actually go back to doing things right, and its value might improve again.
Frankly, by then it will probably be too late.
I wish I was wrong about this.
Reprinted with permission
Photo Credit: Korionov/Shutterstock
I was with a friend recently who has a pretty exciting Internet startup company. He has raised some money and might raise more, his product is in beta and it’s good. It solves a difficult technical problem many companies are struggling with. We argued a little over the name of the product. Of course I thought my suggested name was better or certainly cleverer, but then he said, “It doesn’t matter because we’ll probably sell the company before the product ever ships. It may never appear at all.”
His company will exit almost before it enters. This is happening a lot lately and we generally think it is a good thing but it’s not.
If, like me, you spend a lot of time around startups you know that one of the standard questions asked of founders is “what’s your exit strategy?” An exit is a so-called liquidity event -- a transaction of some sort that turns company equity into spendable cash often making someone (hopefully the founders) rich enough for their children to have something to fight over.
Typical exits are Initial Public Offerings of shares or acquisitions, one company being bought by another. But this whole scenario isn’t exactly as it appears, because the person typically asking the exit question is an investor or a prospective investor and what he or she really wants to know is “what’s my exit strategy?”
How are you going to make me rich if I choose to invest in your company?
Were it not for demanding investors the exit question would be asked less often because it isn’t even an issue with many company founders who are already doing what they like and presumably making a good living at it.
The Lifers
What’s Larry Ellison‘s exit strategy?
Larry doesn’t have one.
Neither did Steve Jobs, Gordon Moore, Bob Noyce, Bill Hewlett, Dave Packard, or a thousand other company founders whose names don’t happen to be household words.
What’s Michael Dell’s exit strategy? Dell, who is trying to take his namesake company private -- to de-exit -- wants to climb back inside his corporate womb.
There was a time not long ago when exits happened primarily to appease early investors. The company would go public, money would change hands, but the same people who founded the company would still be running it. That’s how most of the name Silicon Valley firms came to be.
Marc Benioff of Salesforce.com has no exit strategy. Neither does Reed Hastings of Netflix. You know Jeff Bezos at Amazon.com has no exit strategy.
But what about Jack Dorsey of Twitter or even Mark Zuckerberg of Facebook? I wonder about those companies. They just don’t have a sense of permanence to me.
And what about Bill Gates? In Accidental Empires I wrote that Gates wasn’t going anywhere, that running Microsoft was his life’s work. Yet he’s given up his corporate positions and moved on to philanthropy for the most part, despite this week’s effort to shore-up his fading fortune by claiming that iPad users are “frustrated.”
Yeah, right.
Bill Gates didn’t have an exit strategy until running Microsoft stopped being fun, so he found an exit. And I think the same can be said for any of these name founders, that they wanted to stay on the job as long as it remained fun.
Paradigm Pushed
But the new paradigm -- the Instagram paradigm (zero to a billion in 12 months or less) -- is different. This paradigm says that speed is everything and there is no permanence in business. It’s a paradigm pushed by earnings-crazed Wall Street analysts and non-founder public company CEOs who each work an average of four years before pulling their golden ripcords. In high tech this has led to startups being seen as bricks with which big companies are made bigger.
Sometimes these bricks are made of technology, sometimes they are made simply of people.
Build or buy? The answer, whenever possible, is now buy-buy-buy because even if the cost of buying is higher the outcome seems to be more assured. My buddy with his startup has solved a problem being faced by other companies, really big companies, so it’s probably easier for one of those to buy his startup than to solve the problem themselves.
And there’s nothing intrinsically wrong with this except it leads to a lot of people being where they aren’t really happy, working off multi-year pay-outs just counting the days until they can get out of the acquiring company that made them rich.
Even those who embrace the quick-and-dirty ethos of almost instant exits seem to do so because they don’t know better. “What’s your exit strategy?” they’ve been asked a thousand times, so they not only have one, their papier-mâché startups are designed from the start with that exit in mind whether it’s the right thing to do or not.
I think this is sad and -- even worse -- I think it is leading to a lot of wasted talent. It cheats us of chances for greatness.
I wish more companies had no exit strategies at all.
Reprinted with permission
Photo Credit: Mopic/Shutterstock
Remember when Bluetooth phone headsets came along and suddenly there were all these people loudly talking to themselves in public? Schizoid behavior became, if not cool, at least somewhat tolerable. Well expect the same experience now that Google Glass is hitting the street, because contrary to nearly any picture you can find of the thing, when you actually use it most of your time is spent looking up and to the right, where the data is. I call it the Google Gaze.
Only time will tell how traffic courts will come to view Google Glass, but having finally tried one I suspect it may end up on that list of things we’re supposed to drive without.
Another suspicion I have on the basis of five minutes wearing the device is that it will be a huge success for Google. That doesn’t mean Google will sell millions, because I don’t think that’s the idea. I expect we’ll see compatible devices shortly from nearly all Android vendors and the real market impact will be from units across a broad range of brands at a wide range of prices.
And that’s fine with Google, because their plan, I’m sure, is to make money on the data, not the device.
I didn’t think much of the gadget until I tried it and then I instantly realized that it would create a whole new class of apps that I’d call sneaky. A sneaky app is one that quietly provides contextual information the way I imagine a brilliant assistant (if I ever had one) would slip me a note with some key piece of data concerning my meeting, talk, class, phone call, negotiation, argument, etc., just at the moment I most need it.
Google Glass and a bunch of sneaky apps will change my mandate from being prepared to being ready, because you can’t prepare for everything but if you can react quickly enough you can be ready for anything.
But there’s still that mindless stare up and to the right, a telltale giveaway that sneaky things are afoot.
Reprinted with permission
This is not a big story, but I find it interesting. Last week American Airlines had its reservations computer system, called SABRE, go offline for most of a day leading to the cancellation of more than 700 flights. Details are still sketchy (here’s American’s video apology) but this is beginning to look like a classic example of a system that became too integrated and a company that was too dependent on a single technology.
To be clear, according to American the SABRE system did not itself fail, what failed was the airline’s access to its own system -- a networking problem. And for further clarification, American no longer owns SABRE, which was spun off several years ago as Sabre Holdings, but the airline is still the system’s largest customer. It’s interesting that Sabre Holdings has yet to say anything about this incident.
American built the first computerized airline reservation system back in the 1950s. It was so far ahead of its time that the airline not only had to write the software, it built the hardware, too. Over the years competing systems were developed at other airlines but some of those, TWA and United included, were splintered versions of SABRE. American has modernized and extended the same code base for over 50 years, which is long even by mainframe standards.
Today SABRE is probably the most intricate and complex system of its type on earth and Sabre Holdings sells SABRE technology to other industries like railroads and trucking companies. In many ways it is hard to dissociate the airline and the computer system, and that seems to be the problem last week.
The American SABRE system includes both a passenger reservation system and a flight operations system. Last week the passenger reservation system became inaccessible because of a networking issue. In addition to reservations, passenger check-in, and baggage tracking, the system also passes weight and location information over to the flight operations system, which calculates flap settings and V speeds (target takeoff speeds based on aircraft weight and local weather) for each departure runway and flight combination. The loss of either system will cause flight delays or cancellations, not just because the calculations have to be done by hand, but because the company had become totally dependent on SABRE for running its business.
Without SABRE American literally didn’t know where its airplanes were.
Here’s an example. SABRE has backup computer systems, but all of them are dependent on a microswitch on the nose gear of every American airliner to tell when the plane has left the ground. That microswitch is the dreaded single point of failure. And while it may not be that switch that failed in this instance, it is still a second-order failure, because if you can’t communicate with the microswitch it may as well be busted.
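To see why backups don’t help here, consider this toy sketch. It is hypothetical -- the function and names below are invented, not American’s or Sabre’s actual architecture -- but it shows how redundant systems behind a single shared signal all fail together:

# Hypothetical illustration only; none of these names reflect the real SABRE system.

def read_weight_on_wheels(link_up):
    # Stands in for the nose-gear microswitch. If the network link is down,
    # the reading is unavailable -- functionally the same as a broken switch.
    if not link_up:
        raise ConnectionError("cannot reach aircraft sensor")
    return True   # plane is on the ground

def flight_status(backends, link_up):
    for _ in range(backends):
        try:
            return "on the ground" if read_weight_on_wheels(link_up) else "airborne"
        except ConnectionError:
            continue   # fail over to the next backup -- which shares the same dependency
    return "unknown -- we no longer know where the airplane is"

print(flight_status(backends=3, link_up=False))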
That’s what happens with such inbred systems that no one person fully understands. But it’s easy to get complacent and American was used to having its systems up and running 24/7. The last significant computer outage at American, in fact, happened back in the 1980s.
That one was caused by a squirrel.
Reprinted with permission
There’s an old joke in which a man asks a woman if she’ll spend the night with him for $1 million. She will. Then he asks if she’ll spend the night with him for $10.
“Do you think I’m a prostitute?” she asks.
“We’ve already established that”, he replies. “This is just a price negotiation”.
Not a great joke, but it came to mind recently when a reader pointed me to a panel discussion last September at the Brookings Institution, ironically enough about STEM education and the shortage of qualified IT workers. Watch the video if you can, especially the part where Microsoft general counsel Brad Smith offers to pay the government $10,000 each for up to 6,000 H-1B visas.
In the joke, this is analogous to the $10 offer. There’s a $1 million offer, too, which is another U.S. visa -- the EB-5 so-called immigrant investor visa, 15,000 of which are available each year and most go unclaimed. Why?
The EB-5 visa is better in many respects than the H-1B. The EB-5, for one thing, is a true immigrant visa leading to U.S. citizenship, where the H-1B, despite misleading arguments to the contrary, is by law a non-immigrant visa good for three or six years after which the worker has to go back to their native country. But the EB-5 requires the immigrant bring with him or her $1 million to be invested locally in an active business.
What’s wrong with that? Can’t Microsoft or any other big tech employer suffering from a severe lack of technical workers just set these immigrants up as little corporations capitalized at $1 million? It must be a better return on investment than the 1.52 percent Redmond made on its billions in cash in 2011. Yet they don’t do it. Why?
The answer is simple economics wrapped up in a huge stinking lie. First of all there is no critical shortage of technical workers. That’s the lie. Here’s a study released last week from the Economic Policy Institute that shows there is no shortage of native U.S. STEM (Science, Technology, Engineering and Mathematics) workers. None at all.
You may recall this lack of a true labor shortage was confirmed empirically in another column of mine looking at tech hiring in Memphis, Tennessee.
If there were such a shortage, Microsoft and other companies would be utilizing the EB-5 and other visa programs beyond the H-1B. They’d do anything they could to get those desperately needed tech workers.
Some argue that these companies are using H-1Bs to force down local labor rates. Just how far those rates are being forced down is becoming clearer, in this case thanks to the offer from Microsoft’s Brad Smith. If H-1Bs are each worth $10,000 to Microsoft, the average savings from using an H-1B has to be more than $10,000 plus the risk premium of cheating the system.
But the H-1B program wasn’t started to save money, and money savings can’t even be considered as a reason for granting an H-1B under the regulations -- though companies have become pretty brazen about that one, advertising positions as open only to H-1B candidates.
An interesting aspect of this story is that some readers have characterized Smith’s offer as a bribe. Maybe it isn’t. Maybe it’s just a gift or it’s intended to cover the true cost to the local and national economies of using an H-1B worker or, more importantly, not using a comparably trained U.S. citizen. But that can hardly be the case given the high unemployment rate among U.S. STEM workers.
What this kind of offer seems to be counting on are the typically terrible math skills of elected government officials; $10,000 ($3,333 per year) is not going to cover the lost income or true cost to society of a computer science graduate taking a lower-paying non-technical position.
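Here’s that arithmetic, sketched out. Only the $10,000 offer and the three-year visa term come from the story; the $20,000 annual pay cut is a made-up number for illustration:

# Back-of-the-envelope check on the $10,000-per-visa offer.
offer_per_visa = 10_000        # Brad Smith's offered payment per H-1B visa
visa_term_years = 3            # an H-1B visa is good for three years
assumed_pay_cut = 20_000       # hypothetical annual income lost by a displaced U.S. worker

print(round(offer_per_visa / visa_term_years))   # 3333 -- the $3,333-per-year figure above
print(assumed_pay_cut * visa_term_years)         # 60000 -- far more than the offer, under that assumption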
What we need, I think, is a much simpler test for whether H-1Bs are actually warranted. The test I would impose is simple: if granting an H-1B results in the loss of a job for a U.S. citizen or green card holder, then that H-1B shouldn’t be granted.
Solving true technical labor shortages or being able to import uniquely skilled foreign workers are one thing, but this supposed H-1B crisis is something else altogether.
Reprinted with permission
Twenty-first in a series. The final chapter of the first edition, circa 1991, of Robert X. Cringely's Accidental Empires concludes with some predictions prophetic and others, well...
Remember Pogo? Pogo was Doonesbury in a swamp, the first political cartoon good enough to make it off the editorial page and into the high-rent district next to the horoscope. Pogo was a ‘possum who looked as if he was dressed for a Harvard class reunion and who acted as the moral conscience for the first generation of Americans who knew how to read but had decided not to.
The Pogo strip remembered by everyone who knows what the heck I am even talking about is the one in which the little ‘possum says, “We have met the enemy and he is us.” But today’s sermon is based on the line that follows in the next panel of that strip -- a line that hardly anyone remembers. He said, “We are surrounded by insurmountable opportunity.”
We are surrounded by insurmountable opportunity.
Fifteen years ago, a few clever young people invented a type of computer that was so small you could put it on a desk and so useful and cheap to own that America found places for more than 60 million of them. These same young people also invented games to play on those computers and business applications that were so powerful and so useful that we nearly all became computer literate, whether we wanted to or not.
Remember computer literacy? We were all supposed to become computer literate, or something terrible was going to happen to America. Computer literacy meant knowing how to program a computer, but that was before we really had an idea what personal computers could be used for. Once people had a reason for using computers other than to learn how to use computers, we stopped worrying about computer literacy and got on with our spreadsheets.
And that’s where we pretty much stopped.
There is no real difference between an Apple II running VisiCalc and an IBM PS/2 Model 70 running Lotus 1-2-3 version 3.0. Sure, the IBM has 100 times the speed and 1,000 times the storage of the Apple, but they are both just spreadsheet machines. Put the same formulas in the same cells, and both machines will give the same answer.
In 1984, marketing folks at Lotus tried to contact the people who bought the first ten copies of VisiCalc in 1979. Two users could not be reached, two were no longer using computers at all, three were using Lotus 1-2-3, and three were still using VisiCalc on their old Apple IIs. Those last three people were still having their needs met by a five-year-old product.
Marketing is the stimulation of long-term demand by solving customer problems. In the personal computer business, we’ve been solving more or less the same problem for at least 10 years. Hardware is faster and software is more sophisticated, but the only real technical advances in software in the last ten years have been the Lisa’s multitasking operating system and graphical user interface, Adobe’s PostScript printing technology, and the ability to link users together in local area networks.
Ken Okin, who was in charge of hardware engineering for the Lisa and now heads the group designing Sun Microsystems’ newest workstations, keeps a Lisa in his office at Sun just to help his people put their work in perspective. “We still have a multitasking operating system with a graphical user interface and bit-mapped screen, but back then we did it with half a mip [one mip equals one million computer instructions per second] in 1 megabyte of RAM,” he said. “Today on my desk I have basically the same system, but this time I have 16 mips and an editor that doesn’t seem to run in anything less than 20 megabytes of RAM. It runs faster, sure, but what will it do that is different from the Lisa? It can do round windows; that’s all I can find that’s new. Round windows, great!”
There hasn’t been much progress in software for two reasons. The bigger reason is that companies like Microsoft and Lotus have been making plenty of money introducing more and more people to essentially the same old software, so they saw little reason to take risks on radical new technologies. The second reason is that radical new software technologies seem to require equally radical increases in hardware performance, something that is only now starting to take place as 80386- and 68030-based computers become the norm.
Fortunately for users and unfortunately for many companies in the PC business, we are about to break out of the doldrums of personal computing. There is a major shift happening right now that is forcing change on the business. Four major trends are about to shift PC users into warp speed: standards-based computing, RISC processors, advanced semiconductors, and the death of the mainframe. Hold on!
In the early days of railroading in America, there was no rule that said how far apart the rails were supposed to be, so at first every railroad set its rails a different distance apart, with the result that while a load of grain could be sent from one part of the country to another, the car it was loaded in couldn’t be. It took about thirty years for the railroad industry to standardize on just a couple of gauges of track. As happens in this business, one type of track, called standard gauge, took about 85 percent of the market.
A standard gauge is coming to computing, because no one company -- even IBM -- is powerful enough to impose its way of doing things on all the other companies. From now on, successful computers and software will come from companies that build them from scratch with the idea of working with computers and software made by their competitors. This heretical idea was foisted on us all by a company called Sun Microsystems, which invented the whole concept of open systems computing and has grown into a $4 billion company literally by giving software away.
Like nearly every other venture in this business, Sun got its start because of a Xerox mistake. The Defense Advanced Research Projects Agency wanted to buy Alto workstations, but the Special Programs Group at Xerox, seeing a chance to stick the feds for the entire Alto development budget, marked up the price too high even for DARPA. So DARPA went down the street to Stanford University, where they found a generic workstation based on the Motorola 68000 processor. Designed originally to run on the Stanford University Network, it was called the S.U.N. workstation.
Andy Bechtolscheim, a Stanford graduate student from Germany, had designed the S.U.N. workstation, and since Stanford was not in the business of building computers for sale any more than Xerox was, he tried to interest established computer companies in filling the DARPA order. Bob Metcalfe at 3Com had a chance to build the S.U.N. workstation but turned it down. Bechtolscheim even approached IBM, borrowing a tuxedo from the Stanford drama department to wear for his presentation because his friends told him Big Blue was a very formal operation.
He appeared at IBM wearing the tux, along with a tastefully contrasting pair of white tennis shoes. For some reason, IBM decided not to build the S.U.N. workstation either.
Since all the real computer companies were uninterested in building S.U.N. workstations, Bechtolscheim started his own company, Sun Microsystems. His partners were Vinod Khosla and Scott McNealy, also Stanford grad students, and Bill Joy, who came from Berkeley. The Stanford contingent came up with the hardware design and a business plan, while Joy, who had played a major role in writing a version of the Unix operating system at Berkeley, was Mr. Software.
Sun couldn’t afford to develop proprietary technology, so it didn’t develop any. The workstation design itself was so bland that Stanford University couldn’t find any basis for demanding royalties from the start-up. For networking they embraced Bob Metcalfe’s Ethernet, and for storage they used off-the-shelf hard disk drives built around the Small Computer System Interface (SCSI) specification. For software, they used Bill Joy’s Berkeley Unix. Berkeley Unix worked well on a VAX, so Bechtolscheim and friends just threw away the VAX and replaced it with cheaper hardware. The languages, operating system, networking, and windowing systems were all standard.
Sun learned to establish de facto standards by giving source code away. It was a novel idea, born of the Berkeley Unix community, and rather in keeping with the idea that for some boys, a girl’s attractiveness is directly proportional to her availability. For example, Sun virtually gave away licenses for its Network File System (NFS) networking scheme, which had lots of bugs and some severe security problems, but it was free and so became a de facto standard virtually overnight. Even IBM licensed NFS. This giving away of source code allowed Sun to succeed, first by being the standard setter and then following up with the first hardware to support that standard.
By 1985, Sun had defined a new category of computer, the engineering workstation, but competitors were starting to catch on and catch up to Sun. The way to remain ahead of the industry, they decided, was to increase performance steadily, which they could do by using a RISC processor -- except that there weren’t any RISC processors for sale in 1985.
RISC is an old IBM idea called Reduced Instruction Set Computing. RISC processors were incredibly fast devices that gained their speed from a simple internal architecture that implements only a few computer instructions. Where a Complex Instruction Set Computer (CISC) might have a special “walk across the room but don’t step on the dog” instruction, RISC processors can usually get faster performance by using several simpler instructions: walk-walk-step over-walk-walk.
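The same idea in code, as a toy register machine. This is purely illustrative -- these are not real Intel, Motorola, or SPARC instructions -- but it shows one complex memory-to-memory operation versus the equivalent string of simple ones:

# Toy register machine, invented for illustration only.
memory = {"a": 2, "b": 3, "result": 0}
registers = {}

def cisc_add(dst, src1, src2):
    # One "complex" instruction: fetch both operands from memory,
    # add them, and write the result back, all in a single step.
    memory[dst] = memory[src1] + memory[src2]

def risc_load(reg, addr):
    registers[reg] = memory[addr]

def risc_add(dst, r1, r2):
    registers[dst] = registers[r1] + registers[r2]

def risc_store(addr, reg):
    memory[addr] = registers[reg]

cisc_add("result", "a", "b")       # walk across the room, avoiding the dog
print(memory["result"])            # 5

risc_load("r1", "a")               # walk
risc_load("r2", "b")               # walk
risc_add("r3", "r1", "r2")         # step over
risc_store("result", "r3")         # walk
print(memory["result"])            # 5 -- same answer, simpler individual steps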
RISC processors are cheaper to build because they are smaller and more can be fit on one piece of silicon. And because they have fewer transistors (often under 100,000), yields are higher too. It’s easier to increase the clock speed of RISC chips, making them faster. It’s easier to move RISC designs from one semiconductor technology to a faster one. And because RISC forces both hardware and software designers to keep it simple, stupid, they tend to be more robust.
Sun couldn’t interest Intel or Motorola in building a RISC processor. Neither company wanted to endanger its lucrative CISC processor business. So Bill Joy and Dave Patterson designed a processor of their own in 1985, called SPARC. By this time, both Intel and Motorola had stopped allowing other semiconductor companies to license their processor designs, thus keeping all the high-margin sales in Santa Clara and Schaumburg, Illinois. This, of course, pissed off the traditional second-source manufacturers, so Sun signed up those companies to do SPARC.
Since Sun designed the SPARC processor, it could buy the chips more cheaply than any other computer maker. Sun engineers knew, too, when higher-performance versions of the SPARC were going to be introduced. These facts of life have allowed Sun to dominate the engineering workstation market, as well as to make important inroads into other markets formerly dominated by IBM and DEC.
Sun scares hardware and software competitors alike. The company practically gives away system software, which scares companies like Microsoft and Adobe that prefer to sell it. The industry is abuzz with software consortia set up with the intention to do better standards-based software than Sun does but to sell it, not give it away.
Sun also scares entrenched hardware competitors like DEC and IBM by actually encouraging cloning of its hardware architecture, relying on a balls-to-the-wall attitude that says Sun will stay in the high-margin leading edge of the product wave simply by bringing newer, more powerful SPARC systems to market sooner than any of its competitors can.
DEC has tried, and so far failed, to compete with Sun, using a RISC processor built by MIPS Computer Systems. Figuring if you can’t beat them, join them, HP has actually allied with Sun to do software. IBM reacted to Sun by building a RISC processor of its own too. Big Blue spent more on developing its Sun killer, the RS/6000, than it would have cost to buy Sun Microsystems outright. The RS/6000, too, is a relative failure.
Why did Bill Gates, in his fourth consecutive hour of sitting in a hotel bar in Boston, sinking ever deeper into his chair, tell the marketing kids from Lotus Development that IBM would be out of business in seven years? What does Bill Gates know that we don’t know?
Bill Gates knows that the future of computing will unfold on desktops, not in mainframe computer rooms. He knows that IBM has not had a very good handle on the desktop software market. He thinks that without the assistance of Microsoft, IBM will eventually forfeit what advantage it currently has in personal computers.
Bill Gates is a smart guy.
But you and I can go even further. We can predict the date by which the old IBM -- IBM the mainframe computing giant -- will be dead. We can predict the very day that the mainframe computer era will end.
Mainframe computing will die with the coming of the millennium. On December 31, 1999, right at midnight, when the big ball drops and people are kissing in New York’s Times Square, the era of mainframe computing will be over.
Mainframe computing will end that night because a lot of people a long time ago made a simple mistake. Beginning in the 1950s, they wrote inventory programs and payroll programs for mainframe computers, programs that process income tax returns and send out welfare checks—programs that today run most of this country. In many ways those programs have become our country. And sometime during those thirty-odd years of being moved from one mainframe computer to another, larger mainframe computer, the original program listings, the source code for thousands of mainframe applications, were just thrown away. We have the object code—the part of the program that machines can read—which is enough to move the software from one type of computer to another. But the source code—the original program listing that people can read, that has details of how these programs actually work—is often long gone, fallen through a paper shredder back in 1967. There is mainframe software in this country that cost at least $50 billion to develop for which no source code exists today.
This lack of commented source code would be no big deal if more of those original programmers had expected their programs to outlive them. But hardly any programmer in 1959 expected his payroll application to be still cutting checks in 1999, so nobody thought to teach many of these computer programs what to do when the calendar finally says it’s the year 2000. Any program that prints a date on a check or an invoice, and that doesn’t have an algorithm for dealing with a change from the twentieth to the twenty-first century, is going to stop working. I know this doesn’t sound like a big problem, but it is. It’s a very big problem.
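Here is a minimal sketch of the kind of code he’s describing. The function is invented for illustration, but the two-digit-year shortcut is exactly the problem:

# A hypothetical two-digit-year calculation; not real payroll code,
# just a demonstration of where the arithmetic breaks.

def years_of_service(hired_yy, current_yy):
    # Fine as long as both dates fall in the same century...
    return current_yy - hired_yy

print(years_of_service(hired_yy=59, current_yy=99))   # 40 -- correct in 1999
print(years_of_service(hired_yy=59, current_yy=0))    # -59 -- the year 2000, read as 1900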
Looking for a growth industry in which to invest? Between now and the end of the decade, every large company in America either will have to find a way to update its mainframe software or will have to write new software from scratch. New firms will appear dedicated to the digital archaeology needed to update old software. Smart corporations will trash their old software altogether and start over. Either solution is going to cost lots more than it did to write the software in the first place. And all this new mainframe software will have one thing in common: it won’t run on a mainframe. Mainframe computers are artifacts of the 1960s and 1970s. They are kept around mainly to run old software and to gladden the hearts of MIS directors who like to think of themselves as mainframe gods. Get rid of the old software, and there is no good reason to own a mainframe computer. The new software will run faster, more reliably, and at one-tenth the cost on a desktop workstation, which is why the old IBM is doomed.
“But workstations will never run as reliably as mainframes,” argue the old-line corporate computer types, who don’t know what they are talking about. Workstations today can have as much computing power and as much data storage as mainframes. Ten years from now, they’ll have even more. And by storing copies of the same corporate data on duplicated machines in separate cities or countries and connecting them by high-speed networks, banks, airlines, and all the other big transaction processors that still think they’d die without their mainframe computers will find their data are safer than they are now, trapped inside one or several mainframes, sitting in the same refrigerated room in Tulsa, Oklahoma.
Mainframes are old news, and the $40 billion that IBM brings in each year for selling, leasing, and servicing mainframes will be old news too by the end of the decade.
There is going to be a new IBM, I suppose, but it probably won’t be the company we think of today. The new IBM should be a quarter the size of the current model, but I doubt that current management has the guts to make those cuts in time. The new IBM is already at a disadvantage, and it may not survive, with or without Bill Gates.
So much for mainframes. What about personal computers? PCs, at least as we know them today, are doomed too. That’s because the chips are coming.
While you and I were investing decades alternately destroying brain cells and then regretting their loss, Moore’s Law was enforcing itself up and down Silicon Valley, relentlessly demanding that the number of transistors on a piece of silicon double every eighteen months, while the price stayed the same. Thirty-five years of doubling and redoubling, thrown together with what the lady at the bank described to me as “the miracle of compound interest,” means that semiconductor performance gains are starting to take off. Get ready for yet another paradigm shift in computing.
Intel’s current top-of-the-line 80486 processor has 1.2 million transistors, and the 80586, coming in 1992, will have 3 million transistors. Moore’s Law has never let us down, and my sources in the chip business can think of no technical reason why it should be repealed before the end of the decade, so that means we can expect to see processors with the equivalent of 96 million transistors by the year 2000. Alternately, we’ll be able to buy a dowdy old 80486 processor for $11.
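A sketch of the compounding at work here, using only the figures already cited -- 3 million transistors on the 80586 in 1992, doubling every eighteen months:

transistors = 3_000_000
year = 1992.0
for _ in range(5):                      # five 18-month doublings reach late 1999
    year += 1.5
    transistors *= 2
    print(year, f"{transistors:,}")
# The last line printed is 1999.5 and 96,000,000 -- the roughly 96 million
# transistor "millennium processor" projected above for the year 2000.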
No single processor that can be imagined today needs 96 million transistors. The reality of the millennium processor is that it will be a lot smaller than the processors of today, and smaller means faster, since electrical signals don’t have to travel as far inside the chip. In keeping with the semiconductor makers’ need to add value continually to keep the unit price constant, lots of extra circuits will be included in the millennium processor—circuits that have previously been on separate plug-in cards. Floppy disk controllers, hard disk controllers, Ethernet adapters, and video adapters are already leaving their separate circuit cards and moving as individual chips onto PC motherboards. Soon they will leave the motherboard and move directly into the microprocessor chip itself.
Hard disk drives will be replaced by memory chips, and then those chips too will be incorporated in the processor. And there will still be space and transistors left over—space enough eventually to gang dozens of processors together on a single chip.
Apple’s Macintosh, which used to have more than seventy separate computer chips, is now down to fewer than thirty. In two years, a Macintosh will have seven chips. Two years after that, the Mac will be two chips, and Apple won’t be a computer company anymore. By then Apple will be a software company that sells operating systems and applications for single-chip computers made by Motorola. The MacMotorola chips themselves may be installed in desktops, in notebooks, in television sets, in cars, in the wiring of houses, even in wristwatches. Getting the PC out of its box will fuel the next stage of growth in computing. Your 1998 Macintosh may be built by Nissan and parked in the driveway, or maybe it will be a Swatch.
Forget about keyboards and mice and video displays, too, for the smallest computers, because they’ll talk to you. Real-time, speaker-independent voice recognition takes a processor that can perform 100 million computer instructions per second. That kind of performance, which was impossible at any cost in 1980, will be on your desktop in 1992 and on your wrist in 1999, when the hardware will cost $625. That’s for the Casio version; the Rolex will cost considerably more.
That’s the good news. The bad news comes for companies that today build PC clones. When the chip literally becomes the computer, there will be no role left for computer manufacturers who by then would be slapping a chip or two inside a box with a battery and a couple of connectors. Today’s hardware companies will be squeezed out long before then, unable to compete with the economies of scale enjoyed by the semiconductor makers. Microcomputer companies will survive only by becoming resellers, which means accepting lower profit margins and lower expectations, or by going into the software business.
On Thursday night, April 12, 1991, eight top technical people from IBM had a secret meeting in Cupertino, California, with John Sculley, chairman of Apple Computer. Sculley showed them an IBM PS/2 Model 70 computer running what appeared to be Apple’s System 7.0 software. What the computer was actually running was yet another Apple operating system code-named Pink, intended to be run on a number of different types of microprocessors. The eight techies were there to help decide whether to hitch IBM’s future to Apple’s software.
Sculley explained to the IBMers that he had realized Apple could never succeed as a hardware company. Following the model of Novell, the network operating system company, Apple would have to live or die by its software. And living, to a software company, means getting as many hardware companies as possible to use your operating system. IBM is a very big hardware company.
Pink wasn’t really finished yet, so the demo was crude, the software was slow, the graphics were especially bad, but it worked. The IBM experts reported back to Boca Raton that Apple was onto something.
The talks with Apple resumed several weeks later, taking place sometimes on the East Coast and sometimes on the West. Even the Apple negotiators scooted around the country on IBM jets and registered in hotels under assumed names so the talks could remain completely secret.
Pink turned out to be more than an operating system. It was also an object-oriented development environment that had been in the works at Apple for three years, staffed with a hundred programmers. Object orientation was a concept invented in Norway but perfected at Xerox PARC to allow large programs to be built as chunks of code called objects that could be mixed and matched to create many different types of applications. Pink would allow the same objects to be used on a PC or a mainframe, creating programs that could be scaled up or down as needed. Combining objects would take no time at all either, allowing applications to be written faster than ever. Writing Pink programs could be as easy as using a mouse to move object icons around on a video screen and then linking them together with lines and arrows.
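For readers who haven’t seen the idea, here is a generic sketch of what mixing and matching objects means. The classes below are invented for illustration in ordinary Python; they are not Pink’s actual object model:

# Invented classes standing in for reusable objects.

class TextBuffer:
    def __init__(self):
        self.lines = []
    def append(self, line):
        self.lines.append(line)

class Renderer:
    def render(self, buffer):
        return "\n".join(buffer.lines)

class WordCounter:
    def count(self, buffer):
        return sum(len(line.split()) for line in buffer.lines)

# Application 1: a tiny editor, built from a buffer and a renderer.
buf = TextBuffer()
buf.append("objects can be mixed and matched")
print(Renderer().render(buf))

# Application 2: a reporting tool, built from the same buffer object and a counter.
print(WordCounter().count(buf))   # 6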
IBM had already started its own project in partnership with Metaphor Computer Systems to create an object-oriented development environment called Patriot. Patriot, which was barely begun when Apple revealed the existence of Pink to IBM, was expected to take 500 man-years to write. What IBM would be buying in Pink, then, was a 300 man-year head start.
In late June, the two sides reached an impasse, and talks broke down. Jim Cannavino, head of IBM’s PC operation, reported to IBM chairman John Akers that Apple was asking for too many concessions. “Get back in there, and do whatever it takes to make a deal,” Akers ordered, sounding unlike any previous chairman of IBM. Akers knew that the long-term survival of IBM was at stake.
On July 3, the two companies signed a letter of intent to form a jointly owned software company that would continue development of Pink for computers of all sizes. To make the deal appear as if it went two ways, Apple also agreed to license the RISC processor from IBM’s RS/6000 workstation, which would be shrunk from five chips down to two by Motorola, Apple’s longtime supplier of microprocessors. Within three years, Apple and IBM would be building computers using the same processor and running the same software—software that would look like Apple’s Macintosh, without even a hint of IBM’s Common User Access interface or its Systems Application Architecture programming guidelines. Those sacred standards of IBM were effectively dead because Apple rightly refused to be bound by them. Even IBM had come to realize that market share makes standards; companies don’t. The only way to succeed in the future will be by working seamlessly with all types of computers, even if they are made by competitors.
This deal with Apple wasn’t the first time that IBM had tried to make a quantum leap in system software. In 1988, Akers had met Steve Jobs at a birthday party for Katherine Graham, owner of Newsweek and the Washington Post. Jobs took a chance and offered Akers a demo of NeXTStep, the object-oriented interface development system used in his NeXT Computer System. Blown away by the demo, Akers cut the deal with NeXT himself and paid $10 million for a NeXTStep license.
Nothing ever came of NeXTStep at IBM because it could produce only graphical user interfaces, not entire applications, and because the programmers at IBM couldn’t figure how to fit it into their raison d’etre—SAA. But even more important, the technical people of IBM were offended that Akers had imposed outside technology on them from above. They resented NeXTStep and made little effort to use it. Bill Gates, too, had argued against NeXTStep because it threatened Microsoft. (When InfoWorld’s Peggy Watt asked Gates if Microsoft would develop applications for the NeXT computer, he said, “Develop for it? I’ll piss on it.”)
Alas, I’m not giving very good odds that Steve Jobs will be the leader of the next generation of personal computing.
The Pink deal was different for IBM, though, in part because NeXTStep had failed and the technical people at IBM realized they’d thrown away a three-year head start. By 1991, too, IBM was a battered company, suffering from depressed earnings and looking at its first decline in sales since 1946. A string of homegrown software fiascos had IBM so unsure of what direction to move in that the company had sunk to licensing nearly every type of software and literally throwing it at customers, who could mix and match as they liked. “Want an imaging model? Well, we’ve got PostScript, GPI, and X-Windows—take your pick.” Microsoft and Bill Gates were out of the picture, too, and IBM was desperate for new software partnerships.
IBM has 33,000 programmers on its payroll but is so far from leading the software business (and knows it) that it is betting the company on the work of 100 Apple programmers wearing T-shirts in Mountain View, California.
Apple and IBM, caught between the end of the mainframe and the ultimate victory of the semiconductor makers, had little choice but to work together. Apple would become a software company, while IBM would become a software and high-performance semiconductor company. Neither company was willing to risk on its own the full cost of bringing to market the next-generation computing environment ($5 billion, according to Cringely’s Second Law). Besides, there weren’t any other available allies, since nearly every other computer company of note had already joined either the ACE or SPARC alliances that were Apple and IBM’s competitors for domination of future computing.
ACE, the Advanced Computing Environment consortium, is Microsoft’s effort to control the future of computing and Compaq’s effort to have a future in computing. Like Apple-IBM, ACE is a hardware-software development project based on linking Microsoft’s NT (New Technology) operating system to a RISC processor, primarily the R-4000, from MIPS Computer Systems. In fact, ACE was invented as a response to IBM’s Patriot project before Apple became involved with IBM.
ACE has the usual bunch of thirty to forty Microsoft licensees signed up, though only time will tell how many of these companies will actually offer products that work with the MIPS/Microsoft combination.
But remember that there is only room for two standards; one of these efforts is bound to fail.
In early 1970, my brother and I were reluctant participants in the first draft lottery. I was hitchhiking in Europe at the time and can remember checking nearly every day in the International Herald Tribune for word of whether I was going to Vietnam. I finally had to call home for the news. My brother and I are three years apart in age, but we were in the same lottery because it was the first one, meant to make Richard Nixon look like an okay guy. For that year only, every man from 18 to 26 years old had his birthday thrown in the same hopper. The next year, and every year after, only the 18-year-olds would have their numbers chosen. My number was 308. My brother’s number was 6.
Something very similar to what happened to my brother and me with the draft also happened to nearly everyone in the personal computer business during the late 1970s. Then, there were thousands of engineers and programmers and would-be entrepreneurs who had just been waiting for something like the personal computer to come along. They quit their jobs, quit their schools, and started new hardware and software companies all over the place. Their exuberance, sheer numbers, and willingness to die in human wave technology attacks built the PC business, making it what it is today.
But today, everyone who wants to be in the PC business is already in it. Except for a new batch of kids who appear out of school each year, the only new blood in this business is due to immigration. And the old blood is getting tired—tired of failing in some cases or just tired of working so hard and now ready to enjoy life. The business is slowing down, and this loss of energy is the greatest threat to our computing future as a nation. Forget about the Japanese; their threat is nothing compared to this loss of intellectual vigor.
Look at Ken Okin. Ken Okin is a great hardware engineer. He worked at DEC for five years, at Apple for four years, and has been at Sun for the last five years. Ken Okin is the best-qualified computer hardware designer in the world, but Ken Okin is typical of his generation. Ken Okin is tired.
“I can remember working fifteen years ago at DEC,” Okin said. “I was just out of school, it was 1:00 in the morning, and there we were, testing the hardware with all these logic analyzers and scopes, having a ball. ‘Can you believe they are paying for us to play?’ we asked each other. Now it’s different. If I were vested now, I don’t know if I would go or stay. But I’m not vested—that will take another four years—and I want my fuck you money.”
Staying in this business for fuck you money is staying for the wrong reason.
Soon, all that is going to remain of the American computer industry will be high-performance semiconductors and software, but I’ve just predicted that we won’t even have the energy to stay ahead in software. Bummer. I guess this means it’s finally my turn to add some value and come up with a way out of this impending mess.
The answer is an increase in efficiency. The era of start-ups built this business, but we don’t have the excess manpower or brainpower anymore to allow nineteen out of twenty companies to fail. We have to find a new business model that will provide the same level of reward without the old level of risk, a model that can produce blockbuster new applications without having to create hundreds or thousands of tiny technical bureaucracies run by unhappy and clumsy administrators as we have now. We have to find a model that will allow entrepreneurs to cash out without having to take their companies public and pretend that they ever meant more than working hard for five years and then retiring. We started out, years ago, with Dan Fylstra’s adaptation of the author-publisher model, but that is not a flexible or rich enough model to support the complex software projects of the next decade. Fortunately, there is already a business model that has been perfected and fine-tuned over the past seventy years, a business model that will serve us just fine. Welcome to Hollywood.
The world eats dinner to U.S. television. The world watches U.S. movies. It’s all just software, and what works in Hollywood will work in Silicon Valley too. Call it the software studio.
Today’s major software companies are like movie studios of the 1930s. They finance, produce, and distribute their own products. Unfortunately, it’s hard to do all those things well, which is why Microsoft reminds me of Disney from around the time of The Love Bug. But the movie studio of the 1990s is different; it is just a place where directors, producers, and talent come and go—only the infrastructure stays. In the computer business, too, we’ve held to the idea that every product is going to live forever. We should be like the movies and only do sequels of hits. And you don’t have to keep the original team together to do a sequel. All you have to do is make sure that the new version can read all the old product files and that it feels familiar.
The software studio acknowledges that these start-up guys don’t really want to have to create a large organization. What happens is that they reinvent the wheel and end up functioning in roles they think they are supposed to like, but most of them really don’t. And because they are performing these roles -- pretending to be CEOs -- they aren’t getting any programming done. Instead, let’s follow a movie studio model, where there is central finance, administration, manufacturing, and distribution, but nearly everything else is done under contract. Nearly everyone -- the authors, the directors, the producers -- works under contract. And most of them take a piece of the action and a small advance.
There are many advantages to the software studio. Like a movie studio, there are established relationships with certain crafts. This makes it very easy to get a contract programmer, writer, marketer, etc. Not all smart people work at Apple or Sun or Microsoft. In fact, most smart people don’t work at any of those companies. The software studio would allow program managers to find the very best person for a particular job. A lot of the scrounging is eliminated. The programmers can program. The would-be moguls can either start a studio of their own or package ideas and talent together just like independent movie producers do today. They can become minimoguls and make a lot of money, but be responsible for at most a few dozen people. They can be Steven Spielberg or George Lucas to Microsoft’s MGM or Lotus’s Paramount.
We’re facing a paradigm shift in computing, which can be viewed either as a catastrophe or an opportunity. Mainframes are due to die, and PCs and workstations are colliding. Processing power is about to go off the scale, though we don’t seem to know what to do with it. The hardware business is about to go to hell, and the people who made all this possible are fading in the stretch.
What a wonderful time to make money!
Here’s my prescription for future computing happiness. The United States is losing ground in nearly every area of computer technology except software and microprocessors. And guess what? About the only computer technologies that are likely to show substantial growth in the next decade are -- software and microprocessors! The rest of the computer industry is destined to shrink.
Japan has no advantage in software, and nothing short of a total change of national character on their part is going to change that significantly. One really remarkable thing about Japan is the achievement of its craftsmen, who are really artists, trying to produce perfect goods without concern for time or expense. This effect shows, too, in many large-scale Japanese computer programming projects, like their work on fifth-generation knowledge processing. The team becomes so involved in the grandeur of their concept that they never finish the program. That’s why Japanese companies buy American movie studios: they can’t build competitive operations of their own. And Americans sell their movie studios because the real wealth stays right here, with the creative people who invent the software.
The hardware business is dying. Let it. The Japanese and Koreans are so eager to take over the PC hardware business that they are literally trying to buy the future. But they’re only buying the past.
Reprinted with permission
Twentieth in a series. "Market research firms tend to serve the same function for the PC industry that a lamppost does for a drunk", writes Robert X. Cringely in this installment of his 1991 classic Accidental Empires. The context is the then-universal forecast that OS/2 would overtake MS-DOS. Analysts were wrong then, much as they are wrong today making predictions about smartphones, tablets and PCs. The insightful chapter also explains the vaporware and product-leak tactics that IBM pioneered, Microsoft refined and Apple later adopted.
In Prudhoe Bay, in the oilfields of Alaska’s North Slope, the sun goes down sometime in late November and doesn’t appear again until January, and even then the days are so short that you can celebrate sunrise, high noon, and sunset all with the same cup of coffee. The whole day looks like that sliver of white at the base of your thumbnail.
It’s cold in Prudhoe Bay in the wintertime, colder than I can say or you would believe -- so cold that the folks who work for the oil companies start their cars around October and leave them running twenty-four hours a day clear through to April just so they won’t freeze up.
Idling in the seemingly endless dark is not good for a car. Spark plugs foul and carburetors gum up. Gas mileage goes completely to hell, but that’s okay; they’ve got the oil. Keeping those cars and trucks running night and pseudoday means that there are a lot of crummy, gas-guzzling, smoke-spewing vehicles in Prudhoe Bay in the winter, but at least they work.
Nobody ever lost his job for leaving a car running overnight during a winter in Prudhoe Bay.
And it used to be that nobody ever lost his job for buying computers from IBM.
But springtime eventually comes to Alaska. The tundra begins to melt, the days get longer than you can keep your eyes open, and the mosquitoes are suddenly thick as grass. It’s time for an oil change and to give that car a rest. When the danger’s gone -- when the environment has improved to a point where any car can be counted on to make it through the night, when any tool could do the job -- then efficiency and economy suddenly do become factors. At the end of June in Prudhoe Bay, you just might get in trouble for leaving a car running overnight, if there was a night, which there isn’t.
IBM built its mainframe computer business on reliable service, not on computing performance or low prices. Whether it was in Prudhoe Bay or Houston, when the System 370/168 in accounting went down, IBM people were there right now to fix it and get the company back up and running. IBM customer hand holding built the most profitable corporation in the world. But when we’re talking about a personal computer rather than a mainframe, and it’s just one computer out of a dozen, or a hundred, or a thousand in the building, then having that guy in the white IBM coveralls standing by eventually stops being worth 30 percent or 50 percent more.
That’s when it’s springtime for IBM.
IBM’s success in the personal computer business was a fluke. A company that was physically unable to invent anything in less than three years somehow produced a personal computer system and matching operating system in one year. Eighteen months later, IBM introduced the PC-XT, a marginally improved machine with a marginally improved operating system. Eighteen months after that, IBM introduced its real second-generation product, the PC-AT, with five times the performance of the XT.
From 1981 to 1984, IBM set the standard for personal computing and gave corporate America permission to take PCs seriously, literally creating the industry we know today. But after 1984, IBM lost control of the business.
Reality caught up with IBM’s Entry Systems Division with the development of the PC-AT. From the AT on, it took IBM three years or better to produce each new line of computers. By mainframe standards, three years wasn’t bad, but remember that mainframes are computers, while PCs are just piles of integrated circuits. PCs follow the price/performance curve for semiconductors, which says that performance has to double every eighteen months. IBM couldn’t do that anymore. It should have been ready with a new line of industry-leading machines by 1986, but it wasn’t. It was another company’s turn.
Compaq Computer cloned the 8088-based IBM PC in a year and cloned the 80286-based PC-AT in six months. By 1986, IBM should have been introducing its 80386-based machine, but it didn’t have one. Compaq couldn’t wait for Big Blue and so went ahead and introduced its DeskPro 386. The 386s that soon followed from other clone makers were clones of the Compaq machine, not clones of IBM. Big Blue had fallen behind the performance curve and would never catch up. Let me say that a little louder: IBM will never catch up.
IBM had defined MS-DOS as the operating system of choice. It set a 16-bit bus standard for the PC-AT that determined how circuit cards from many vendors could be used in the same machine. These were benevolent standards from a market leader that needed the help of other hardware and software companies to increase its market penetration. That was all it took. Once IBM could no longer stay ahead of the performance curve, the IBM standards still acted as guidelines, so clone makers could take the lead from there, and they did. IBM saw its market share slowly start to fall.
But IBM was still the biggest player in the PC business, still had the greatest potential for wreaking technical havoc, and knew better than any other company how to slow the game down to a more comfortable pace. Here are some market control techniques refined by Big Blue over the years.
Technique No. 1. Announce a direction, not a product. This is my favorite IBM technique because it is the most efficient one from Big Blue’s perspective. Say the whole computer industry is waiting for IBM to come out with its next-generation machines, but instead the company makes a surprise announcement: “Sorry, no new computers this year, but that’s because we are committing the company to move toward a family of computers based on gallium arsenide technology [or Josephson junctions, or optical computing, or even vegetable computing -- it doesn't really matter]. Look for these powerful new computers in two years.”
“Damn, I knew they were working on something big,” say all of IBM’s competitors as they scrap the computers they had been planning to compete with the derivative machines expected from IBM.
Whether IBM’s rutabaga-based PC ever appears or not, all IBM competitors have to change their research and development focus, looking into broccoli and parsnip computing, just in case IBM is actually onto something. By stating a bold change of direction, IBM looks as if it’s grasping the technical lead, when in fact all it’s really doing is throwing competitors for a loop, burning up their R&D budgets, and ultimately making them wait up to two years for a new line of computers that may or may not ever appear. (IBM has been known, after all, to say later, “Oops, that just didn’t work out,” as they did with Josephson junction research.) And even when the direction is for real, the sheer market presence of IBM makes most other companies wait for Big Blue’s machines to appear to see how they can make their own product lines fit with IBM’s.
Whenever IBM makes one of these statements of direction, it’s like the yellow flag coming out during an auto race. Everyone continues to drive, but nobody is allowed to pass.
IBM’s Systems Application Architecture (SAA) announcement of 1987, which was supposed to bring a unified programming environment, user interface, and applications to most of its mainframe, minicomputer, and personal computer lines by 1989, was an example of such a statement of direction. SAA was for real, but major parts of it were still not ready in 1991.
Technique No. 2. Announce a real product, but do so long before you actually expect to deliver, disrupting the market for competitive products that are already shipping.
This is a twist on Technique No. 1 though aimed at computer buyers rather than computer builders. Because performance is always going up and prices are always going down, PC buyers love to delay purchases, waiting for something better. A major player like IBM can take advantage of this trend, using it to compete even when IBM doesn’t yet have a product of its own to offer.
In the 1983-1985 time period, for example, Apple had the Lisa and the Macintosh, VisiCorp had VisiOn, its graphical computing environment for IBM PCs, Microsoft had shipped the first version of Windows, Digital Research produced GEM, and a little company in Santa Monica called Quarterdeck Office Systems came out with a product called DesQ. All of these products -- even Windows, which came from Microsoft, IBM’s PC software partner -- were perceived as threats by IBM, which had no equivalent graphical product. To compete with these graphical environments that were already available, IBM announced its own software that would put pop-up windows on a PC screen and offer easy switching from application to application and data transfer from one program to another. The announcement came in the summer of 1984 at the same time the PC-AT was introduced. They called the new software TopView and said it would be available in about a year.
DesQ had been the hit of Comdex, the computer dealers’ convention held in Atlanta in the spring of 1984. Just after the show, Quarterdeck raised $5.5 million in second-round venture funding, moved into new quarters just a block from the beach, and was happily shipping 2,000 copies of DesQ per month. DesQ had the advantage over most of the other windowing systems that it worked with existing MS-DOS applications. DesQ could run more than one application at a time, too -- something none of the other systems (except Apple’s Lisa) offered. Then IBM announced TopView. DesQ sales dropped to practically nothing, and the venture capitalists asked Quarterdeck for their money back.
All the potential DesQ buyers in the world decided in a single moment to wait for the truly incredible software IBM promised. They forgot, of course, that IBM was not particularly noted for incredible software -- in fact, IBM had never developed PC software entirely on its own before. TopView was true Blue -- written with no help from Microsoft.
The idea of TopView hurt all the other windowing systems and contributed to the death of VisiOn and DesQ. Quarterdeck dropped from fifty employees down to thirteen. Terry Myers, co-founder of Quarterdeck and one of the few women to run a PC software company, borrowed $20,000 from her mother to keep the company afloat while her programmers madly rewrote DesQ to be compatible with the yet-to-be-delivered TopView. They called the new program DesqView.
When TopView finally appeared in 1985, it was a failure. The product was slow and awkward to use, and it lived up to none of the promises IBM made. You can still buy TopView from IBM, but nobody does; it remains on the IBM product list strictly because removing it would require writing off all development expenses, which would hurt IBM’s bottom line.
Technique No. 3. Don’t announce a product, but do leak a few strategic hints, even if they aren’t true.
IBM should have introduced a follow-on to the PC-AT in 1986 but it didn’t. There were lots of rumors, sure, about a system generally referred to as the PC-2, but IBM staunchly refused to comment. Still, the PC-2 rumors continued, accompanied by sparse technical details of a machine that all the clone makers expected would include an Intel 80386 processor. And maybe, the rumors continued, the PC-2 would have a 32-bit bus, which would mean yet another technical standard for add-in circuit cards.
It would have been suicide for a clone maker to come out with a 386 machine with its own 32-bit bus in early 1986 if IBM was going to announce a similar product a month or three later, so the clone makers didn’t introduce their new machines. They waited and waited for IBM to announce a new family of computers that never came. And during the time that Compaq, Dell, AST, and the others were waiting for IBM to make its move, millions of PC-ATs were flowing into Fortune 1000 corporations, still bringing in the big bucks at a time when they shouldn’t have still been viewed as top-of-the-line machines.
When Compaq Computer finally got tired of waiting and introduced its own DeskPro 386, it was careful to make its new machine use the 16-bit circuit cards intended for the PC-AT. Not even Compaq thought it could push a proprietary 32-bit bus standard in competition with IBM. The only 32-bit connections in the Compaq machine were between the processor and main memory; in every other respect, it was just like a 286.
Technique No. 4. Don’t support anybody else’s standards; make your own.
The original IBM Personal Computer used the PC-DOS operating system at a time when most other microcomputers used in business ran CP/M. The original IBM PC had a completely new bus standard, while nearly all of those CP/M machines used something called the S-100 bus. Pushing a new operating system and a new bus should have put IBM at a disadvantage, since there were thousands of CP/M applications and hundreds of S-100 circuit cards, and hardly any PC-DOS applications and less than half a dozen PC circuit cards available in 1981. But this was not just any computer start-up; this was IBM, and so what would normally have been a disadvantage became IBM’s advantage. The IBM PC killed CP/M and the S-100 bus and gave Big Blue a full year with no PC-compatible competitors.
When the rest of the world did its computer networking with Ethernet, IBM invented another technology, called Token Ring. When the rest of the world thought that a multitasking workstation operating system meant Unix, IBM insisted on OS/2, counting on its influence and broad shoulders either to make the IBM standard a de facto standard or at least to interrupt the momentum of competitors.
Technique No. 5. Announce a product; then say you don’t really mean it.
IBM has always had a problem with the idea of linking its personal computers together. PCs were cheaper than 3270 terminals, so IBM didn’t want to make it too easy to connect PCs to its mainframes and risk hurting its computer terminal business. And linked PCs could, by sharing data, eventually compete with minicomputer or mainframe time-sharing systems, which were IBM’s traditional bread and butter. Proposing an IBM standard for networking PCs or embracing someone else’s networking standard was viewed in Armonk as a risky proposition. By the mid-1980s, though, other companies were already moving forward with plans to network IBM PCs, and Big Blue just couldn’t stand the idea of all that money going into another company’s pocket.
In 1985, then, IBM announced its first networking hardware and software for personal computers. The software was called the PC Network (later the PC LAN Program). The hardware was a circuit card that fit in each PC and linked them together over a coaxial cable, transferring data at up to 2 million bits per second. IBM sold $200 million worth of these circuit cards over the next couple of years. But that wasn’t good enough (or bad enough) for IBM, which announced that the network cards, while they were a product, weren’t part of an IBM direction. IBM’s true networking direction was toward another hardware technology called Token Ring, which would be available, as I’m sure you can predict by now, in a couple of years.
Customers couldn’t decide whether to buy the hardware that IBM was already selling or to wait for Token Ring, which would have higher performance. Customers who waited for Token Ring were punished for their loyalty, since IBM, which had the most advanced semiconductor plants in the world, somehow couldn’t make enough Token Ring adapters to meet demand until well into 1990. The result was that IBM lost control of the PC networking business.
The company that absolutely controls the PC networking business is headquartered at the foot of a mountain range in Provo, Utah, just down the street from Brigham Young University. Novell Inc. runs the networking business today as completely as IBM ran the PC business in 1983. A lot of Novell’s success has to do with the technical skills of those programmers who come to work straight out of BYU and who have no idea how much money they could be making in Silicon Valley. And a certain amount of its success can be traced directly to the company’s darkest moment, when it was lucky enough to nearly go out of business in 1981.
Novell Data Systems, as it was called then, was a struggling maker of not very good CP/M computers. The failing company threw the last of its money behind a scheme to link its computers together so they could share a single hard disk drive. Hard disks were expensive then, and a California company, Corvus Systems, had already made a fortune linking Apple IIs together in a similar fashion. Novell hoped to do for CP/M computers what Corvus had done for the Apple II.
In September 1981, Novell hired three contract programmers to devise the new network hardware and software. Drew Major, Dale Neibaur, and Kyle Powell were techies who liked to work together and hired out as a unit under the name Superset. Superset -- three guys who weren’t even Novell employees -- invented Novell’s networking technology and still direct its development today. They still aren’t Novell employees.
Companies like Ashton-Tate and Lotus Development ran into serious difficulties when they lost their architects. Novell and Microsoft, which have retained their technical leaders for over a decade, have avoided such problems.
In 1981, networking meant sharing a hard disk drive but not sharing data between microcomputers. Sure, your Apple II and my Apple II could be linked to the same Corvus 10-megabyte hard drive, but your data would be invisible to my computer. This was a safety feature, because the microcomputer operating systems of the time couldn’t handle the concept of shared data.
Let’s say I am reading the text file that contains your gothic romance just when you decide to add a juicy new scene to chapter 24. I am reading the file, adding occasional rude comments, when you grab the file and start to add text. Later, we both store the file, but which version gets stored: the one with my comments, or the one where Captain Phillips finally does the nasty with Lady Margaret? Who knows?
What CP/M lacked was a facility for directory locking, which would allow only one user at a time to change a file. I could read your romance, but if you were already adding text to it, directory locking would keep me from adding any comments. Directory locking could be used to make some data read only, and could make some data readable only by certain users. These were already important features in multiuser or networked systems but not needed in CP/M, which was written strictly for a single user.
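In modern terms, what was missing looks something like the following minimal Python sketch of one-writer-at-a-time locking on a Unix system -- an illustration of the idea only, not CP/M’s or Netware’s actual mechanism, and the file names and functions are invented for the example:

    # A minimal sketch of one-writer-at-a-time locking (advisory, Unix-only).
    # Illustration of the idea only; not CP/M's or Netware's actual code.
    import fcntl

    def add_chapter(path, text):
        with open(path, "a") as f:
            fcntl.flock(f, fcntl.LOCK_EX)    # exclusive lock: one writer at a time
            try:
                f.write(text)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    def read_romance(path):
        with open(path) as f:
            fcntl.flock(f, fcntl.LOCK_SH)    # shared lock: many readers, no writer
            try:
                return f.read()
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

While you hold the exclusive lock and type your new scene, my attempt to grab the file simply waits; that is the whole trick.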
The guys from Superset added directory locking to CP/M, improved CP/M’s mechanism for searching the disk directory, and moved all of these functions from the networked microcomputer up to a specialized processor attached to the hard disk drive. By November 1981, they’d turned what was supposed to have been a disk server like Corvus’s into a file server where users could share data. Novell’s Data Management Computer could support twelve simultaneous users at the same performance level as a single-user CP/M system.
Superset, not Novell, decided to network the new IBM PC. The three hackers bought one of the first PCs in Utah and built the first PC network card. They did it all on their own and against the wishes of Novell, which just then finally ran out of money.
The venture capitalists whose money it was that Novell had used up came to Utah looking for salvageable technology and found only Superset’s work worth continuing. While Novell was dismantled around them, the three contractors kept working and kept getting paid. They worked in isolation for two years, developing whole generations of product that were never sold to anyone.
The early versions of most software are so bad that good programmers usually want to throw them away but can’t because ship dates have to be met. But Novell wasn’t shipping anything in 1982-1983, so early versions of its network software were thrown away and started over again. Novell was able to take the time needed to come up with the correct architecture, a rare luxury for a start-up, and subsequently the company’s greatest advantage. Going broke turned out to have been very good for Novell.
Novell hardware was so bad that the company concentrated almost completely on software after it started back in business in 1983. All the other networking companies were trying to sell hardware. Corvus was trying to sell hard disks. Televideo was trying to sell CP/M boxes. 3Com was trying to sell Ethernet network adapter cards. None of these companies saw any advantage to selling its software to go with another company’s hard disk, computer, or adapter card. They saw all the value in the hardware, while Novell, which had lousy hardware and knew it, decided to concentrate on networking software that would work with every hard drive, every PC, and every network card.
By this time Novell had a new leader in Ray Noorda, who’d bumped through a number of engineering, then later marketing and sales, jobs in the minicomputer business. Noorda saw that Novell’s value lay in its software. By making wiring a nonissue, with Novell’s software—now called Netware—able to run on any type of networking scheme, Noorda figured it would be possible to stimulate the next stage of growth. “Growing the market” became Noorda’s motto, and toward that end he got Novell back in the hardware business but sold workstations and network cards literally at cost just to make it cheaper and easier for companies to decide to network their offices. Ray Noorda was not a popular man in Silicon Valley.
In 1983, when Noorda was taking charge of Novell, IBM asked Microsoft to write some PC networking software. Microsoft knew very little about networking in 1983, but Bill Gates was not about to send his major customer away, so Microsoft got into the networking business.
“Our networking effort wasn’t serious until we hired Darryl Rubin, our network architect,” admitted Microsoft’s Steve Ballmer in 1991.
Wait a minute, Steve, did anyone tell IBM back in 1983 that Microsoft wasn’t really serious about this networking stuff? Of course not.
Like most of Microsoft’s other stabs at new technology, PC networking began as a preemptive strike rather than an actual product. The point of Gates’s agreeing to do IBM’s network software was to keep IBM as a customer, not to do a good product. In fact, Microsoft’s entry into most new technologies follows this same plan, with the first effort being a preemptive strike, the second being market research to see what customers really want in a product, and the third being the real product. It happened that way with Microsoft’s efforts at networking, word processing, and Windows, and will continue in the company’s current efforts in multimedia and pen-based computing. It’s too bad, of course, that hundreds of thousands of customers spend millions and millions of dollars on those early efforts—the ones that aren’t real products. But heck, that’s their problem, right?
Microsoft decided to build its network technology on top of DOS because that was the company franchise. All new technologies were conceived as extensions to DOS, keeping the old technology competitive—or at least looking so—in an increasingly complex market. But DOS wasn’t a very good system on which to build a network operating system. DOS was limited to 640K of memory. DOS had an awkward file structure that got slower and slower as the number of files increased, which could become a major problem on a server with thousands of files. In contrast, Novell’s Netware could use megabytes of memory and had a lightning-fast file system. After all, Netware was built from scratch to be a network operating system, while Microsoft’s product wasn’t.
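The difference is easy to feel with a toy experiment -- invented for illustration, not DOS’s or Netware’s actual code -- that compares scanning a flat list of directory entries against probing a hashed index:

    # Toy comparison: linear directory scan versus an indexed lookup.
    # Invented for illustration; not DOS or Netware code.
    import random, string, timeit

    names = ["".join(random.choices(string.ascii_lowercase, k=8)) + ".txt"
             for _ in range(50000)]
    linear_directory = list(names)                                 # scan every entry
    indexed_directory = {name: i for i, name in enumerate(names)}  # one hashed probe

    target = names[-1]                                             # worst case for the scan
    print("scan:  ", timeit.timeit(lambda: target in linear_directory, number=100))
    print("index: ", timeit.timeit(lambda: target in indexed_directory, number=100))

The scan slows down in direct proportion to the number of files; the indexed lookup barely notices, which is roughly the gap users felt between the two kinds of server.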
MS-Net appeared in 1985. It was licensed to more than thirty different hardware companies in the same way that MS-DOS was licensed to makers of PC clones. Only three versions of MS-Net actually appeared, including IBM’s PC LAN program, a dog.
The final nail in Microsoft’s networking coffin was also driven in 1985 when Novell introduced Netware 2.0, which ran on the 80286 processor in IBM’s PC-AT. You could run MS-Net on an AT also but only in the mode that emulated an 8086 processor and was limited to addressing 640K. But Netware on an AT took full advantage of the 80286 and could address up to 16 megabytes of RAM, making Novell’s software vastly more powerful than Microsoft’s.
This business of taking software written for the 8086 processor and porting it to the 80286 normally required completely rewriting the software by hand, often taking years of painstaking effort. It wasn’t just a matter of recompiling the software, of having a machine do the translation, because Microsoft staunchly maintained that there was no way to recompile 8086 code to run on an 80286. Bill Gates swore that such a recompile was impossible. But Drew Major of Superset didn’t know what Bill Gates knew, and so he figured out a way to recompile 8086 code to run on an 80286. What should have taken months or years of labor was finished in a week, and Novell had won the networking war. Six years and more than $100 million later, Microsoft finally admitted defeat.
Meanwhile, back in Boca Raton, IBM was still struggling to produce a follow-on to the PC-AT. The reason that it began taking IBM so long to produce new PC products was the difference between strategy and tactics. Building the original IBM PC was a tactical exercise designed to test a potential new market by getting a product out as quickly as possible. But when the new market turned out to be ten times larger than anyone at IBM had realized and began to affect the sales of other divisions of the company, PCs suddenly became a strategic issue. And strategy takes time to develop, especially at IBM.
Remember that there is nobody working at IBM today who recalls those sun-filled company picnics in Endicott, New York, back when the company was still small, the entire R&D department could participate in one three-legged race, and inertia was not yet a virtue. The folks who work at IBM today generally like the fact that it is big, slow moving, and safe. IBM has built an empire by moving deliberately and hiring third-wave people. Even Don Estridge, who led the tactical PC effort up through the PC-AT, wasn’t welcome in a strategic personal computer operation; Estridge was a second-wave guy at heart and so couldn’t be trusted. That’s why Estridge was promoted into obscurity, and Bill Lowe, who’d proved that he was a company man, a true third waver with only occasional second-wave leanings that could be, and were, beaten out of him over time, was brought back to run the PC operations.
As an enormous corporation that had finally decided personal computers were part of its strategic plan, IBM laboriously reexamined the whole operation and started funding backup ventures to keep the company from being too dependent on any single PC product development effort. Several families of new computers were designed and considered, as were at least a couple of new operating systems. All of this development and deliberation takes time.
Even the vital relationship with Bill Gates was reconsidered in 1985, when IBM thought of dropping Microsoft and DOS altogether in favor of a completely new operating system. The idea was to port to the Intel 286 processor operating system software from a California company called Metaphor Computer Systems. The Metaphor software was yet another outgrowth of work done at Xerox PARC and ran then strictly on IBM mainframes, offering an advanced office automation system with a graphical user interface. The big corporate users who were daring enough to try Metaphor loved it, and IBM dreamed that converting the software to run on PCs would draw personal computers seamlessly into the mainframe world in a way that wouldn’t be so directly competitive with its other product lines. Porting Metaphor software would also have brought IBM a major role in application software for its PCs—an area where the company had so far failed.
Since Microsoft wasn’t even supposed to know that this Metaphor experiment was happening, IBM chose Lotus Development to port the software. The programmers at Lotus had never written an operating system, but they knew plenty about Intel processor architecture, since the high performance of Lotus 1-2-3 came mainly from writing directly to the processor, avoiding MS-DOS as much as possible.
Nothing ever came of the Lotus/Metaphor operating system, which turned out to be an IBM fantasy. Technically, it was asking too much of the 80286 processor. The 80386 might have handled the job, but for other strategic reasons, IBM was reluctant to move up to the 386.
IBM has had a lot of such fantasies and done a lot of negotiating and investigating whacko joint ventures with many different potential software partners. It’s a way of life at the largest computer company in the world, where keeping on top of the industry is accomplished through just this sort of diplomacy. Think of dogs sniffing each other.
IBM couldn’t go forever without replacing the PC-AT, and eventually it introduced a whole new family of microcomputers in April 1987. These were the Personal System/2s and came in four flavors: Models 30, 50, 60, and 80. The Model 30 used an 8086 processor, the Models 50 and 60 used an 80286, and the Model 80 was IBM’s first attempt at an 80386-based PC. The 286 and 386 machines used a new bus standard called the Micro Channel, and all of the PS/2s had 3.5-inch floppy disk drives. By changing hardware designs, IBM was again trying to have the market all to itself.
A new bus standard meant that circuit cards built for the IBM PC, XT, or AT models wouldn’t work in the PS/2s, but the new bus, which was 32 bits wide, was supposed to offer so much higher performance that a little more cost and inconvenience would be well worthwhile. The Micro Channel was designed by an iconoclastic (by IBM standards) engineer named Chet Heath and was reputed to beat the shit out of the old 16-bit AT bus. It was promoted as the next generation of personal computing, and IBM expected the world to switch to its Micro Channel in just the way it had switched to the AT bus in 1984.
But when we tested the PS/2s at InfoWorld, the performance wasn’t there. The new machines weren’t even as fast as many AT clones. The problem wasn’t the Micro Channel; it was IBM. Trying to come up with a clever work-around for the problem of generating a new product line every eighteen months when your organization inherently takes three years to do the job, product planners in IBM’s Entry Systems Division simply decided that the first PS/2s would use only half of the features of the Micro Channel bus. The company deliberately shipped hobbled products so that, eighteen months later, it could discover all sorts of neat additional Micro Channel horsepower, which would be presented in a whole new family of machines using what would then be called Micro Channel 2.
IBM screwed up in its approach to the Micro Channel. Had it introduced the whole product in 1987, doubling the performance of competitive hardware, buyers would have followed IBM to the new standard as they had before. IBM could have led the industry to a new 32-bit bus standard—one where it again would have had a technical advantage for a while. But instead, Big Blue held back features and then tried to scare away clone makers by threatening legal action and talking about granting licenses for the new bus only if licensees paid 5 percent royalties both on their new Micro Channel clones and on every PC, XT, or AT clone they had ever built. The only result of this new hardball attitude was that an industry that had had little success defining a new bus standard by itself was suddenly solidified against IBM. Compaq Computer led a group of nine clone makers that defined their own 32-bit bus standard in competition with the Micro Channel. Compaq led the new group, but IBM made it happen.
From IBM’s perspective, though, its approach to the Micro Channel and the PS/2s was perfectly correct since it acted to protect Big Blue’s core mainframe and minicomputer products. Until very recently, IBM concentrated more on the threat that PCs posed to its larger computers than on the opportunities to sell ever more millions of PCs. Into the late 1980s, IBM still saw itself primarily as a maker of large computers.
Along with new PS/2 hardware, IBM announced in 1987 a new operating system called OS/2, which had been under development at Microsoft when IBM was talking with Metaphor and Lotus. The good part about OS/2 was that it was a true multitasking operating system that allowed several programs to run at the same time on one computer. The bad part about OS/2 was that it was designed by IBM.
When Bill Lowe sent his lieutenants to Microsoft looking for an operating system for the IBM PC, they didn’t carry a list of specifications for the system software. They were looking for something that was ready—software they could just slap on the new machine and run. And that’s what Microsoft gave IBM in PC-DOS: an off-the-shelf operating system that would run on the new hardware. Microsoft, not IBM, decided what DOS would look like and act like. DOS was a Microsoft product, not an IBM product, and subsequent versions, though they appeared each time in the company of new IBM hardware, continued to be 100 percent Microsoft code.
OS/2 was different. OS/2 was strategic, which meant that it was too important to be left to the design whims of Microsoft alone. OS/2 would be designed by IBM and just coded by Microsoft. Big mistake.
OS/2 1.0 was designed to run on the 80286 processor. Bill Gates urged IBM to go straight for the 80386 processor as the target for OS/2, but IBM was afraid that the 386 would offer performance too close to that of its minicomputers. Why buy an AS/400 minicomputer for $200,000, when half a dozen networked PS/2 Model 80s running OS/2-386 could give twice the performance for one third the price? The only reason IBM even developed the 386-based Model 80, in fact, was that Compaq was already selling thousands of its DeskPro 386s. Over the objections of Microsoft, then, OS/2 was aimed at the 286, a chip that Gates correctly called “brain damaged.”
OS/2 had both a large address space and virtual memory. It had more graphics options than either Windows or the Macintosh, as well as being multithreaded and multitasking. OS/2 looked terrific on paper. But what the paper didn’t show was what Gates called “poor code, poor design, poor process, and other overhead” thrust on Microsoft by IBM.
While Microsoft retained the right to sell OS/2 to other computer makers, this time around IBM had its own special version of OS/2, Extended Edition, which included a database called the Data Manager, and an interface to IBM mainframes called the Communication Manager. These special extras were intended to tie OS/2 and the PS/2s into their true function as very smart mainframe terminals. IBM had much more than competing with Compaq in mind when it designed the PS/2s. IBM was aiming toward a true counterreformation in personal computing, leading millions of loyal corporate users back toward the holy mother church—the mainframe.
IBM’s dream for the PS/2s, and for OS/2, was to play a role in leading American business away from the desktop and back to big expensive computers. This was the objective of SAA—IBM’s plan to integrate its personal computers and mainframes—and of what they hoped would be SAA’s compelling application, called OfficeVision.
On May 16, 1989, I sat in an auditorium on the ground floor of the IBM building at 540 Madison Avenue. It was a rainy Tuesday morning in New York, and the room, which was filled with bright television lights as well as people, soon took on the distinctive smell of wet wool. At the front of the room stood a podium and a long table, behind which sat the usual IBM suspects—a dozen conservatively dressed, overweight, middle-aged white men.
George Conrades, IBM’s head of U.S. marketing, appeared behind the podium. Conrades, 43, was on the fast career track at IBM. He was younger than nearly all the other men of IBM who sat at the long table behind him, waiting to play their supporting roles. Behind the television camera lens, 25,000 IBM employees, suppliers, and key customers spread across the world watched the presentation by satellite.
The object of all this attention was a computer software product from IBM called OfficeVision, the result of 4,000 man-years of effort at a cost of more than a billion dollars.
To hear Conrades and the others describe it through their carefully scripted performances, OfficeVision would revolutionize American business. Its “programmable terminals” (PCs) with their immense memory and processing power would gather data from mainframe computers across the building or across the planet, seeking out data without users’ having even to know where the data were stored and then compiling them into colorful and easy-to-understand displays. OfficeVision would bring top executives for the first time into intimate -- even casual -- contact with the vital data stored in their corporate computers. Beyond the executive suite, it would offer access to data, sophisticated communication tools, and intuitive ways of viewing and using information throughout the organization. OfficeVision would even make it easier for typists to type and for file clerks to file.
In the glowing words of Conrades, OfficeVision would make American business more competitive and more profitable. If the experts were right that computing would determine the future success or failure of American business, then OfficeVision simply was that future. It would make that success.
“And all for an average of $7,600 per desk,” Conrades said, “not including the IBM mainframe computers, of course.”
The truth behind this exercise in worsted wool and public relations is that OfficeVision was not at all the future of computing but rather its past, spruced up, given a new coat of paint, and trotted out as an all-new model when, in fact, it was not new at all. In the eyes of IBM executives and their strategic partners, though, OfficeVision had the appearance of being new, which was even better. To IBM and the world of mainframe computers, danger lies in things that are truly new.
With its PS/2s and OS/2 and OfficeVision, IBM was trying to get a jump on a new wave of computing that everyone knew was on its way. The first wave of computing was the mainframe. The second wave was the minicomputer. The third wave was the PC.
Now the fourth wave -- generally called network computing -- seemed imminent, and IBM’s big-bucks commitment to SAA and to OfficeVision was its effort to make the fourth wave look as much as possible like the first three. Mainframes would do the work in big companies, minicomputers in medium-sized companies, and PCs would serve small business as well as acting as “programmable terminals” for the big boys with their OfficeVision setups.
Sadly for IBM, by 1991, OfficeVision still hadn’t appeared, having tripped over mountains of bad code and missed delivery schedules, and having run up against the fact of life that corporate America is willing to invest less than 10 percent of each worker’s total compensation in computing resources for that worker. That’s why secretaries get $3,000 PCs and design engineers get $10,000 workstations. OfficeVision would have cost at least double that amount per desk, had it worked at all, so today IBM is talking about a new, slimmed-down OfficeVision 2.0, which will probably fail too.
When OS/2 1.0 finally shipped months after the PS/2 introduction, every big shot in the PC industry asked his or her market research analysts when OS/2 unit sales would surpass sales of MS-DOS. The general consensus of analysts was that the crossover would take place in the early 1990s, perhaps as soon as 1991. It didn’t happen.
Time to talk about the realities of market research in the PC industry. Market research firms make surveys of buyers and sellers, trying to predict the future. They gather and sift through millions of bytes of data and then apply their S-shaped demand curves, predicting what will and won’t be a hit. Most of what they do is voodoo. And like voodoo, whether their work is successful depends on the state of mind of their victim/customer.
Market research customers are hardware and software companies paying thousands -- sometimes hundreds of thousands -- of dollars, primarily to have their own hunches confirmed. Remember that the question on everyone’s mind was when unit sales of OS/2 would exceed those of DOS. Forget that OS/2 1.0 was late. Forget that there was no compelling application for OS/2. Forget that the operating system, when it did finally appear, was buggy as hell and probably shouldn’t have been released at all. Forget all that, and think only of the question, which was: When will unit sales of OS/2 exceed those of DOS? The assumption (and the flaw) built into this exercise is that OS/2, because it was being pushed by IBM, was destined to overtake DOS, which it hasn’t. But given that the paying customers wanted OS/2 to succeed and that the research question itself suggested that OS/2 would succeed, market research companies like Dataquest, InfoCorp, and International Data Corporation dutifully crazy-glued their usual demand curves on a chart and predicted that OS/2 would be a big hit. There were no dissenting voices. Not a single market research report that I read or read about at that time predicted that OS/2 would be a failure.
Market research firms tend to serve the same function for the PC industry that a lamppost does for a drunk.
OS/2 1.0 was a dismal failure. Sales were pitiful. Performance was pitiful, too, at least in that first version. Users didn’t need OS/2 since they could already multitask their existing DOS applications using products like Quarterdeck’s DesqView. Independent software vendors, who were attracted to OS/2 by the lure of IBM, soon stopped their OS/2 development efforts as the operating system’s failure became obvious. But the failure of OS/2 wasn’t all IBM’s fault. Half of the blame has to go on the computer memory crisis of the late 1980s.
OS/2 made it possible for PCs to access far more memory than the pitiful 640K available under MS-DOS. On a 286 machine, OS/2 could use up to 16 megabytes of memory and in fact seemed to require at least 4 megabytes to perform acceptably. Alas, this sudden need for six times the memory came at a time when American manufacturers had just abandoned the dynamic random-access memory (DRAM) business to the Japanese.
In 1975, Japan’s Ministry of International Trade and Industry had organized Japan’s leading chip makers into two groups -- NEC-Toshiba and Fujitsu-Hitachi-Mitsubishi -- to challenge the United States for the 64K DRAM business. They won. By 1985, these two groups had 90 percent of the U.S. market for DRAMs. American companies like Intel, which had started out in the DRAM business, quit making the chips because they weren’t profitable, cutting world DRAM production capacity as they retired. Then, to make matters worse, the United States Department of Commerce accused the Asian DRAM makers of dumping -- selling their memory chips in America at less than what it cost to produce them. The Japanese companies cut a deal with the United States government that restricted their DRAM distribution in America -- at a time when we had no other reliable DRAM sources. Big mistake. Memory supplies dropped just as memory demand rose, and the classic supply-demand effect was an increase in DRAM prices, which more than doubled in a few months. Toshiba, which was nearly the only company making 1 megabit DRAM chips for a while, earned more than $1 billion in profits on its DRAM business in 1989, in large part because of the United States government.
Doubled prices are a problem in any industry, but in an industry based on the idea of prices continually dropping, such an increase can lead to panic, as it did in the case of OS/2. The DRAM price bubble was just that—a bubble—but it looked for a while like the end of the world. Software developers who were already working on OS/2 projects began to wonder how many users would be willing to invest the $1,000 that it was suddenly costing to add enough memory to their systems to run OS/2. Just as raising prices killed demand for Apple’s Macintosh in the fall of 1988 (Apple’s primary reason for raising prices was the high cost of DRAM), rising memory prices killed both the supply and demand for OS/2 software.
Then Bill Gates went into seclusion for a week and came out with the sudden understanding that DOS was good for Microsoft, while OS/2 was probably bad. Annual reading weeks, when Gates stays home and reads technical reports for seven days straight and then emerges to reposition the company, are a tradition at Microsoft. Nothing is allowed to get in the way of planned reading for Chairman Bill. During one business trip to South America, for example, the head of Microsoft’s Brazilian operation tried to impress the boss by taking Gates and several women yachting for the weekend. But this particular weekend had been scheduled for reading, so Bill, who is normally very much on the make, stayed below deck reading the whole time.
Microsoft had loyally followed IBM in the direction of OS/2. But there must have been an idea nagging in the back of Bill Gates’s mind. By taking this quantum leap to OS/2, IBM was telling the world that DOS was dead. If Microsoft followed IBM too closely in this OS/2 campaign, it was risking the more than $100 million in profits generated each year by DOS -- profits that mostly didn’t come from IBM. During one of his reading weeks, Gates began to think about what he called “DOS as an asset” and in the process set Microsoft on a collision course with IBM.
Up to 1989, Microsoft followed IBM’s lead, dedicating itself publicly to OS/2 and promising versions of all its major applications that would run under the new operating system. On the surface, all was well between Microsoft and IBM. Under the surface, there were major problems with the relationship. A feisty (for IBM) band of graphics programmers at IBM’s lab in Hursley, England, first forced Microsoft to use an inferior and difficult-to-implement graphics imaging model in Presentation Manager and then later committed all the SAA operating systems, including OS/2, to using PostScript, from the hated house of Warnock -- Adobe Systems.
Although by early 1990, OS/2 was up to version 1.2, which included a new file system and other improvements, more than 200 copies of DOS were still being sold for every copy of OS/2. Gates again proposed to IBM that they abandon the 286-based OS/2 product entirely in favor of a 386-based version 2.0. Instead, IBM’s Austin, Texas, lab whipped up its own OS/2 version 1.3, generally referred to as OS/2 Lite. Outwardly, OS/2 1.3 tasted great and was less filling; it ran much faster than OS/2 1.2 and required only 2 megabytes of memory. But OS/2 1.3 sacrificed subsystem performance to improve the speed of its user interface, which meant that it was not really as good a product as it appeared to be. Thrilled finally to produce some software that was well received by reviewers, IBM started talking about basing all its OS/2 products on 1.3 -- even its networking and database software, which didn’t even have user interfaces that needed optimizing. To Microsoft, which was well along on OS/2 2.0, the move seemed brain damaged, and this time they said so.
Microsoft began moving away from OS/2 in 1989 when it became clear that DOS wasn’t going away, nor was it in Microsoft’s interest for it to go away. The best solution for Microsoft would be to put a new face on DOS, and that new face would be yet another version of Windows. Windows 3.0 would include all that Microsoft had learned about graphical user interfaces from seven years of working on Macintosh applications. Windows 3.0 would also be aimed at more powerful PCs using 386 processors -- the PCs that Bill Gates expected to dominate business desktops for most of the 1990s. Windows would preserve DOS’s asset value for Microsoft and would give users 90 percent of the features of OS/2, which Gates began to see more and more as an operating system for network file servers, database servers, and other back-end network applications that were practically invisible to users.
IBM wanted to take from Microsoft the job of defining to the world what a PC operating system was. Big Blue wanted to abandon DOS in favor of OS/2 1.3, which it thought could be tied more directly into IBM hardware and applications, cutting out the clone makers in the process. Gates thought this was a bad idea that was bound to fail. He recognized, even if IBM didn’t, that the market had grown to the point where no one company could define and defend an operating system standard by itself. Without Microsoft’s help, Gates thought IBM would fail. With IBM’s help, which Gates viewed more as meddling than assistance, Microsoft might fail. Time for a divorce.
Microsoft programmers deliberately slowed their work on OS/2 and especially on Presentation Manager, its graphical user interface. “What incentive does Microsoft have to get [OS/2-PM] out the door before Windows 3?” Gates asked two marketers from Lotus over dinner following the Computer Bowl trivia match in April 1990. “Besides, six months after Windows 3 ships it will have greater market share than PM will ever have. OS/2 applications won’t have a chance.”
Later that night over drinks, Gates speculated that IBM would “fold” in seven years, though it could last as long as ten or twelve years if it did everything right. Inevitably, though, IBM would die, and Bill Gates was determined that Microsoft would not go down too.
The loyal Lotus marketers prepared a seven-page memo about their inebriated evening with Chairman Bill, giving copies of it to their top management. Somehow I got a copy of the memo, too. And a copy eventually landed on the desk of IBM’s Jim Cannavino, who had taken over Big Blue’s PC operations from Bill Lowe. The end was near for IBM’s special relationship with Microsoft.
Over the course of several months in 1990, IBM and Microsoft negotiated an agreement leaving DOS and Windows with Microsoft and OS/2 1.3 and 2.0 with IBM. Microsoft’s only connection to OS/2 was the right to develop version 3.0, which would run on non-Intel processors and might not even share all the features of earlier versions of OS/2.
The Presentation Manager programmers in Redmond, who had been having Nerfball fights with their Windows counterparts every night for months, suddenly found themselves melded into the Windows operation. A cross-licensing agreement between the two companies remained in force, allowing IBM to offer subsequent versions of DOS to its customers and Microsoft the right to sell versions of OS/2, but the emphasis in Redmond was clearly on DOS and Windows, not OS/2.
“Our strategy for the 90’s is Windows -- one evolving architecture, a couple of implementations,” Bill Gates wrote. “Everything we do should focus on making Windows more successful.”
Windows 3.0 was introduced in May 1990 and sold more than 3 million copies in its first year. Like many other Microsoft products, this third try was finally the real thing. And since it had a head start over its competitors in developing applications that could take full advantage of Windows 3.0, Microsoft was more firmly entrenched than ever as the number one PC software company, while IBM struggled for a new identity. All those other software developers, the ones who had believed three years of Microsoft and IBM predictions that OS/2’s Presentation Manager was the way to go, quickly shifted their OS/2 programmers over to writing Windows applications.
Reprinted with permission
Seventeenth in a series. Love triangles were commonplace during the early days of the PC. Adobe, Apple and Microsoft engaged in such a relationship during the 1980s, and allegiances shifted -- oh, did they. This installment of Robert X. Cringely's 1991 classic Accidental Empires shows how important it is to control a standard and to get others to adopt it.
Of the 5 billion people in the world, there are only four who I’m pretty sure have stayed consistently on the good side of Steve Jobs. Three of them -- Bill Atkinson, Rich Page, and Bud Tribble -- all worked with Jobs at Apple Computer. Atkinson and Tribble are code gods, and Page is a hardware god. Page and Tribble left Apple with Jobs in 1985 to found NeXT Inc., their follow-on computer company, where they remain in charge of hardware and software development, respectively.
So how did Atkinson, Page, and Tribble get off so easily when the rest of us have to suffer through the rhythmic pattern of being ignored, then seduced, then scourged by Jobs? Simple; among the three, they have the total brainpower of a typical Third World country, which is more than enough to make even Steve Jobs realize that he is, in comparison, a single-celled, carbon-based life form. Atkinson, Page, and Tribble have answers to questions that Jobs doesn’t even know he should ask.
The fourth person who has remained a Steve Jobs favorite is John Warnock, founder of Adobe Systems. Warnock is the father that Steve Jobs always wished for. He’s also the man who made possible the Apple LaserWriter printer and desktop publishing. He’s the man who saved the Macintosh.
Warnock, one of the world’s great programmers, has the technical ability that Jobs lacks. He has the tweedy, professorial style of a Robert Young, clearly contrasting with the blue-collar vibes of Paul Jobs, Steve’s adoptive father. Warnock has a passion, too, about just the sort of style issues that are so important to Jobs. Warnock is passionate about the way words and pictures look on a computer screen or on a printed page, and Jobs respects that passion.
Both men are similar, too, in their unwillingness to compromise. They share a disdain for customers based on their conviction that the customer can’t even imagine what they (Steve and John) know. The customer is so primitive that he or she is not even qualified to say what he or she needs.
Welcome to the Adobe Zone.
John Warnock’s rise to programming stardom is the computer science equivalent of Lana Turner’s being discovered sitting in Schwab’s Drugstore in Hollywood. He was a star overnight.
A programmer’s life is spent implementing algorithms, which are just specific ways of getting things done in a computer program. Like chess, where you may have a Finkelstein opening or a Blumberg entrapment, most of what a programmer does is fitting other people’s algorithms to the local situation. But every good programmer has an algorithm or two that is all his or hers, and most programmers dream of that moment when they’ll see more clearly than they ever have before the answer to some incredibly complex programming problem, and their particular solution will be added to the algorithmic lore of programming. During their fifteen minutes of techno-fame, everyone who is anyone in the programming world will talk about the Clingenpeel shuffle or the Malcolm X sort.
Most programmers don’t ever get that kind of instant glory, of course, but John Warnock did. Warnock’s chance came when he was a graduate student in mathematics, working at the University of Utah computer center, writing a mainframe program to automate class registration. It was a big, dumb program, and Warnock, who like every other man in Utah had a wife and kids to support, was doing it strictly for the money.
Then Warnock’s mindless toil at the computer center was interrupted by a student who was working on a much more challenging problem. He was trying to write a graphics program to present on a video monitor an image of New York harbor as seen from the bridge of a ship. The program was supposed to run in real time, which meant that the video ship would be moving in the harbor, with the view slowly shifting as the ship changed position.
The student was stumped by the problem of how to handle the view when one object moved in front of another. Say the video ship was sailing past the Statue of Liberty, and behind the statue was the New York skyline. As the ship moved forward, the buildings on the skyline should appear to shift behind the statue, and the program would have to decide which parts of the buildings were blocked by the statue and find a way to turn off just those parts of the image, shaping the region of turned-off image to fit along the irregular profile of the statue. Put together dozens of objects at varying distances, all shifting in front of or behind each other, and just the calculation of what could and couldn’t be visible was bringing the computer to its knees.
“Why not do it this way?” Warnock asked, looking up from his class registration code and describing a way of solving the problem that had never been thought of before, a way so simple that it should have been obvious but had somehow gone unthought of by the brightest programming minds at the university. No big deal.
Except that it was a big deal. Dumbfounded by Warnock’s casual brilliance, the student told his professor, who told the department chairman, who told the university president, who must have told God (this is Utah, remember), because the next thing he knew, Warnock was giving talks all over the country, describing how he solved the hidden surface problem. The class registration program was forever forgotten.
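The trick, published as what is still called the Warnock algorithm, is recursive area subdivision: if a patch of the screen is too complicated to decide, split it into quadrants and recurse until every patch is either trivially simple or a single pixel. Here is a toy, self-contained Python sketch of that idea, with flat colored rectangles standing in for the statue and the skyline; the scene, the names, and the simplifications are invented for illustration and are not Warnock's actual code:

    # Toy sketch of Warnock-style recursive area subdivision, with axis-aligned
    # rectangles as the only objects. Invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Rect:
        x0: int          # half-open pixel bounds
        y0: int
        x1: int
        y1: int
        depth: float     # smaller = nearer the viewer
        color: str

    def overlaps(r, x0, y0, x1, y1):
        return r.x0 < x1 and x0 < r.x1 and r.y0 < y1 and y0 < r.y1

    def covers(r, x0, y0, x1, y1):
        return r.x0 <= x0 and r.y0 <= y0 and r.x1 >= x1 and r.y1 >= y1

    def paint(canvas, x0, y0, x1, y1, color):
        for y in range(y0, y1):
            for x in range(x0, x1):
                canvas[y][x] = color

    def warnock(canvas, rects, x0, y0, x1, y1):
        if x0 >= x1 or y0 >= y1:
            return                                     # degenerate region
        live = [r for r in rects if overlaps(r, x0, y0, x1, y1)]
        if not live:
            return                                     # empty region: background shows
        front = min(live, key=lambda r: r.depth)       # nearest candidate
        one_pixel = (x1 - x0 <= 1 and y1 - y0 <= 1)
        if covers(front, x0, y0, x1, y1) or one_pixel:
            paint(canvas, x0, y0, x1, y1, front.color)     # trivially decidable
        elif len(live) == 1:
            paint(canvas, max(x0, front.x0), max(y0, front.y0),
                  min(x1, front.x1), min(y1, front.y1), front.color)
        else:                                          # too complicated: subdivide
            mx, my = (x0 + x1) // 2, (y0 + y1) // 2
            for qx0, qy0, qx1, qy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                                       (x0, my, mx, y1), (mx, my, x1, y1)):
                warnock(canvas, live, qx0, qy0, qx1, qy1)

    canvas = [["." for _ in range(16)] for _ in range(8)]
    scene = [Rect(2, 1, 10, 6, depth=2.0, color="B"),   # "skyline", farther away
             Rect(6, 2, 13, 7, depth=1.0, color="S")]   # "statue", in front
    warnock(canvas, scene, 0, 0, 16, 8)
    print("\n".join("".join(row) for row in canvas))

The expensive question -- what hides what -- is only ever asked about regions simple enough to answer cheaply; everything harder is postponed by cutting the region in four.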
Warnock switched his Ph.D. studies from mathematics to computer science, where the action was, and was soon one of the world’s experts on computer graphics.
Computer graphics, the drawing of pictures on-screen and on-page, is very difficult stuff. It’s no accident that more than 80 percent of each human brain is devoted to processing visual data. Looking at a picture and deciding what it portrays is a major effort for humans, and often an impossible one for computers.
Jump back to that image of New York harbor, which was to be part of a ship’s pilot training simulator ordered by the U.S. Maritime Academy. How do you store a three-dimensional picture of New York harbor inside a computer? One way would be to put a video camera in each window of a real ship and then sail that ship everywhere in the harbor to capture a video record of every vista. This would take months, of course, and it wouldn’t take into account changing weather or other ships moving around the harbor, but it would be a start. All the video images could then be digitized and stored in the computer. Deciding what view to display through each video window on the simulator would be just a matter of determining where the ship was supposed to be in the harbor and what direction it was facing, and then finding the appropriate video scene and displaying it. Easy, eh? But how much data storage would it require?
Taking the low-buck route, we’ll require that the view only be in typical PC resolution of 640-by-400 picture elements (pixels), which means that each stored screen will hold 256,000 pixels.
Since this is 8-bit color (8 bits per pixel), that means we’ll need 256,000 bytes of storage (8 bits make 1 byte) for each screen image. Accepting a certain jerkiness of apparent motion, we’ll need to capture images for the video database every ten feet, and at each of those points we’ll have to take a picture in at least eight different directions. That means that for every point in the harbor, we’ll need 2,048,000 bytes of storage. Still not too bad, but how many such picture points are there in New York harbor if we space them every ten feet? The harbor covers about 100 square miles, which works out to 27,878,400 points. So we’ll need just over 57 trillion bytes of storage to represent New York harbor in this manner. Twenty years ago, when this exercise was going on in Utah, there was no computer storage system that could hold 57 trillion bytes of data or even 5.7 trillion bytes. It was impossible. And the system would have been terrifically limited in other ways, too. What would the view be like from the top of the Statue of Liberty? Don’t know. With all the data gathered at sea level, there is no way of knowing how the view would look from a higher altitude.
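The arithmetic is easy to check; here it is as a few lines of Python:

    # Back-of-the-envelope check of the harbor storage arithmetic above.
    pixels_per_screen = 640 * 400                    # 256,000 pixels
    bytes_per_screen = pixels_per_screen * 1         # 8-bit color: 1 byte per pixel
    bytes_per_point = bytes_per_screen * 8           # eight viewing directions
    square_feet_per_square_mile = 5280 * 5280        # 27,878,400
    points = 100 * square_feet_per_square_mile // (10 * 10)   # 10-foot grid, 100 sq. mi.
    total_bytes = points * bytes_per_point
    print(f"{points:,} points, {total_bytes:,} bytes")
    # Prints: 27,878,400 points, 57,094,963,200,000 bytes -- about 57 trillion.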
The problem with this type of computer graphics system is that all we are doing is storing and calling up bits of data rather than twiddling them, as we should do. Computers are best used for processing data, not just retrieving them. That’s how Warnock and his buddies in Utah solved the data storage problem in their model of New York harbor. Rather than take pictures of the whole harbor, they described it to the computer.
Most of New York harbor is empty water. Water is generally flat with a few small waves, it’s blue, and it lives its life at sea level. There I just described most of New York harbor in eighteen words, saving us at least 50 billion bytes of storage. What we’re building here is an imaging model, and it assumes that the default appearance of New York harbor is wet. Where it’s not wet—where there are piers or buildings or islands—I can describe those, too, by telling the computer what the object looks like and where it is positioned in space. What I’m actually doing is telling the computer how to draw a picture of the object, specifying characteristics like size, shape, and color. And if I’ve already described a tugboat, for example, and there are dozens of tugboats in the harbor that look alike, the next time I need to describe one I can just refer back to the earlier description, saying to draw another tugboat and another and another, with no additional storage required.
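A description-based scene might look something like the sketch below; the names and numbers are invented purely for illustration and have nothing to do with Warnock’s actual notation or with PostScript:

    # Toy "describe it, don't photograph it" model of the harbor.
    # Invented for illustration; not Warnock's notation and not PostScript.
    harbor = {"default": "flat blue water with small waves, at sea level",
              "placements": []}

    tugboat = {"name": "tugboat", "shape": "hull plus wheelhouse", "length_m": 20}
    statue = {"name": "statue", "shape": "figure on star fort", "height_m": 93}

    def place(scene, obj, x, y, heading):
        # A placement is just a reference to one stored description plus a position,
        # so a dozen tugboats cost barely more storage than one.
        scene["placements"].append({"object": obj, "x": x, "y": y, "heading": heading})

    place(harbor, statue, 4500, 7200, 0)
    for x in range(1000, 7000, 1000):
        place(harbor, tugboat, x, 3000, 90)

    print(len(harbor["placements"]), "objects placed; only two object descriptions stored")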
This is the stuff that John Warnock thought about in Utah and later at Xerox PARC, where he and Martin Newell wrote a language they called JaM, for John and Martin. JaM provided a vocabulary for describing objects and positioning them in a three-dimensional database. JaM evolved into another language called Interpress, which was used to describe words and pictures to Xerox laser printers. When Warnock was on his own, after leaving Xerox, Interpress evolved into a language called PostScript. JaM, Interpress, and PostScript are really the same language, in fact, but for reasons having to do with copyrights and millions of dollars, we pretend that they are different.
In PostScript, the language we’ll be talking about from now on, there is no difference between a tugboat or the letter E. That is, PostScript can be used to draw pictures of tugboats and pictures of the letter E, and to the PostScript language each is just a picture. There is no cultural or linguistic symbolism attached to the letter, which is, after all, just a group of straight and curved lines filled in with color.
PostScript describes letters and numbers as mathematical formulas rather than as bit maps, which are just patterns of tiny dots on a page or screen. PostScript popularized the outline font, where a description of each letter is stored as a formula for lines and Bézier curves and recipes for which parts of the character are to be filled with color and which parts are not. Outline fonts, because they are based on mathematical descriptions of each letter, are resolution independent; they can be scaled up or down in size and printed in as fine detail as the printer or typesetter is capable of producing. And like the image of a tugboat, which increases in detail as it sails closer, PostScript outline fonts contain “hints” that control how much detail is given up as type sizes get smaller, making smaller type sizes more readable than they otherwise would be.
Before outline fonts can be printed, they have to be rasterized, which means that a description of which bits to print where on the page has to be generated. Before there were outline fonts, bit-mapped fonts were all there were, and they were generated in a few specific sizes by people called fontographers, not computers. But with PostScript and outline fonts, it’s as easy to generate a 10.5-point letter as the usual 10-, 12-, or 14-point versions.
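As a rough illustration, here is one invented curved stroke stored as a cubic Bézier formula and scaled to device pixels at two very different resolutions; the control points and units are made up, and real fonts wrap many such curves in fill rules and hints, but the resolution independence comes from exactly this kind of arithmetic:

    # One made-up stroke of an outline character, stored as a cubic Bezier curve
    # in a 1,000-unit em square, then scaled to device pixels at any size and
    # resolution. Illustration only; real fonts add fill rules and hints.
    def cubic_bezier(p0, p1, p2, p3, t):
        u = 1.0 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return x, y

    stroke = ((100, 0), (100, 550), (450, 700), (700, 700))   # invented control points

    def rasterize(stroke, point_size, dots_per_inch, units_per_em=1000, steps=64):
        # 1 point = 1/72 inch, so scale font units to device pixels.
        scale = point_size / 72.0 * dots_per_inch / units_per_em
        return [tuple(round(c * scale) for c in cubic_bezier(*stroke, i / steps))
                for i in range(steps + 1)]

    print(rasterize(stroke, 10.5, 300)[:3])    # 10.5-point type on a 300-dpi laser printer
    print(rasterize(stroke, 10.5, 2540)[:3])   # the same outline on a 2,540-dpi typesetter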
Warnock and his boss at Xerox, Chuck Geschke, tried for two years to get Xerox to turn Interpress into a commercial product. Then they decided to start their own company with the idea of building the most powerful printer in history, to which people would bring their work to be beautifully printed. Just as Big Blue imagined there was a market for only fifty IBM 650 mainframes, the two ex-Xerox guys thought the world needed only a few PostScript printers.
Warnock and Geschke soon learned that venture capitalists don’t like to fund service businesses, so they next looked into creating a computer workstation with custom document preparation software that could be hooked into laser printers and typesetters, to be sold to typesetting firms and the printing departments of major corporations. Three months into that business, they discovered at least four competitors were already underway with similar plans and more money. They changed course yet again and became sellers of graphics systems software to computer companies, designers of printer controllers featuring their PostScript language, and the first seller of PostScript fonts.
Adobe Systems was named after the creek that ran past Warnock’s garden in Los Altos, California. The new company defined the PostScript language and then began designing printer controllers that could interpret PostScript commands, rasterize the image, and direct a laser engine to print it on page. That’s about the time that Steve Jobs came along.
The usual rule is that hardware has to exist before programmers will write software to run on it. There are a few exceptions to this rule, and one of these is PostScript, which is very advanced, very complex software that still doesn’t run very fast on today’s personal computers. PostScript was an order of magnitude more complex than most personal computer software of the mid-1980s. Tim Paterson’s Quick and Dirty Operating System was written in less than six months. Jonathan Sachs did 1-2-3 in a year. Paul Allen and Bill Gates pulled together Microsoft BASIC in six weeks. Even Andy Hertzfeld put less than two years into writing the system software for Macintosh. But PostScript took twenty man-years to perfect. It was the most advanced software ever to run on a personal computer, and few microcomputers were up to the task.
The mainframe world, with its greater computing horsepower, might logically have embraced PostScript printers, so the fact that the personal computer was where PostScript made its mark is amazing, and is yet another testament to Steve Jobs’s will.
The 128K Macintosh was a failure. It was an amazing design exercise that sat on a desk and did next to nothing, so not many people bought early Macs. The mood in Cupertino back in 1984 was gloomy. The Apple III, the Lisa, and now the Macintosh were all failures. The Apple II division was being ignored, the Lisa division was deliberately destroyed in a fit of Jobsian pique, and the Macintosh division was exhausted and depressed.
Apple had $250 million sunk in the ground before it started making money on the Macintosh. Not even the enthusiasm of Steve Jobs could make the world see a 128K Mac with a floppy disk drive, two applications, and a dot-matrix printer as a viable business computer system.
Apple employees may drink poisoned Kool-Aid, but Apple customers don’t.
It was soon evident, even to Jobs, that the Macintosh needed a memory boost and a compelling application if it was going to succeed. The memory boost was easy, since Apple engineers had secretly included the ability to expand memory from 128K to 512K, in direct defiance of orders from Jobs. Coming up with the compelling application was harder; it demanded patience, which was never seen as a virtue at Apple.
The application so useful that it compels people to buy a specific computer doesn’t have to be a spreadsheet, though that’s what it turned out to be for the Apple II and the IBM PC. Jobs and Sculley thought it would be a spreadsheet, too, that would spur sales of the Mac. They had high hopes for Lotus Jazz, which turned up too late and too slow to be a major factor in the market. There was, as always, a version of Microsoft’s Multiplan for the Mac, but that didn’t take off in the market either, primarily because the Mac, with its small screen and relatively high price, didn’t offer a superior environment for spreadsheet users. For running spreadsheets, at least, PCs were cheaper and had bigger screens, which was all that really mattered.
For the Lisa, Apple had developed its own applications, figuring that the public would latch onto one of the seven as the compelling application. But while the Macintosh came with two bundled applications of its own -- MacWrite and MacPaint -- Jobs wanted to do things in as un-Lisa-like manner as possible, which meant that the compelling application would have to come from outside Apple.
Mike Boich was put in charge of what became Apple’s Macintosh evangelism program. Evangelists like Alain Rossmann and Guy Kawasaki were sent out to bring the word of Macintosh to independent software developers, giving them free computers and technical support. They hoped that these efforts would produce the critical mass of applications needed for the Mac to survive and at least one compelling application that was needed for the Mac to succeed.
There are lots of different personal computers in the world, and they all need software. But little software companies, which describes about 90 percent of the personal computer software companies around, can’t afford to make too many mistakes by developing applications for computers that fail in the marketplace. At Electronic Arts, Trip Hawkins claims to have been approached to develop software for sixty different computer types over six or seven years. Hawkins took a chance on eighteen of those systems, while most companies pick only one or two.
When considering whether to develop for a different computer platform, software companies are swayed by an installed base -- the number of computers of a given type that are already working in the world -- by money, and by fear of being left behind technically. Boich, Rossmann, and Kawasaki had no installed base of Macintoshes to point to. They couldn’t claim that there were a million or 10 million Macintoshes in the world, with owners eager to buy new and innovative applications. And they didn’t have money to pay developers to do Mac applications -- something that Hewlett-Packard and IBM had done in the past.
The pitch that worked for the Apple evangelists was to cultivate the developers’ fear of falling behind technically. “Graphical user interfaces are the future of computing,” they’d say, “and this is the best graphical user interface on the market right now. If you aren’t developing for the Macintosh, five years from now your company won’t be in business, no matter what graphical platform is dominant then.”
The argument worked, and 350 Macintosh applications were soon under development. But Apple still needed new technology that would set the Mac apart from its graphical competitors. The Lisa and the Xerox Star had not been ignored by Apple’s competitors, and a number of other graphical computing environments were announced in 1983, even before the Macintosh shipped.
VisiCorp was betting (and losing) its corporate existence on a proprietary graphical user interface and software for IBM PCs and clones called VisiOn. VisiOn appeared in November 1983, more than a year after it was announced. With VisiOn, you got a mouse, a special circuit card that was installed inside the PC, and software including three applications -- word processing, spreadsheet, and graphics. VisiOn offered no color, no icons, and it was slow -- all for a list price of $1,795. The shipping version was supposed to have been twelve times faster than the demo; it wasn’t. Developers hated VisiOn because they had to pay a big up-front fee to get the information needed to write programs (literally anti-evangelism) and then had to buy time on a Prime minicomputer, the only computer environment in which applications could be developed. VisiOn was a dud, but until it was actually out, failing in the world, it had a lot of people scared.
One person who was definitely scared by VisiOn was Bill Gates of Microsoft, who stood transfixed through three complete VisiOn demonstrations at the Comdex computer trade show in 1982. Gates had Charles Simonyi fly down from Seattle just to see the VisiOn demo, then Gates immediately went back to Bellevue and started his own project to throw a graphical user interface on top of DOS. This was the Interface Manager, later called Microsoft Windows, which was announced in 1983 and shipped in 1985. Windows was slow, too, and there weren’t very many applications that supported the environment, but it fulfilled Gates’ goal, which was not to be the best graphical environment around, but simply to defend the DOS franchise. If the world wanted a graphical user interface, Gates would add one to DOS. If they want a pen-based interface, he’ll add one to DOS (it’s called Windows for Pen Computing). If the world wants voice recognition, or multimedia, or fingerpainting input, Gates will add it to DOS, because DOS, and the regular income it provides, year after year, funds everything else at Microsoft. DOS is Microsoft.
Gates did Windows as a preemptive strike against VisiOn, and he developed Microsoft applications for the Macintosh, because it was clear that Windows would not be good enough to stop the Mac from becoming a success. Since he couldn’t beat the Macintosh, Gates supported it, and in turn gained knowledge of graphical environments. He also made an agreement with Apple allowing him to use certain Macintosh features in Windows, an agreement that later landed both companies in court.
Finally, there was GEM, another graphical environment for the IBM PC, which appeared from Gary Kildall’s Digital Research, also in 1983. GEM is still out there, in fact, but the only GEM application of note is Ventura Publisher, a popular desktop publishing package for the IBM world, ironically sold by Xerox. Most Ventura users don’t even know they are using GEM.
Apple needed an edge against all these would-be competitors, and that edge was the laser printer. Hewlett-Packard introduced its LaserJet printer in 1984, setting a new standard for PC printing, but Steve Jobs wanted something much, much better, and when he saw the work that Warnock and Geschke were doing at Adobe, he knew they could give him the sort of printer he wanted. HP’s LaserJet output looked as if it came from a typewriter, while Jobs was determined that his LaserWriter output would look like it came from a typesetter.
Jobs used $2.5 million to buy 15 percent of Adobe, an extravagant move that was wildly unpopular among Apple’s top management, who generally gave the money up for lost and moved to keep Jobs from making other such investments in the future. Apple’s investment in Adobe was far from lost, though. It eventually generated more than $10 billion in sales for Apple, and the stock was sold six years later for $89 million. Still, in 1984, conventional wisdom said the Adobe investment looked like a bad move.
The Apple LaserWriter used the same laser print mechanism that HP’s LaserJet did. It also used a special controller card that placed inside the printer what was then Apple’s most powerful computer; the printer itself was a computer. Adobe designed a printer controller for the LaserWriter, and Apple designed one too. Jobs arrogantly claimed that nobody—not even Adobe—could engineer as well as Apple, so he chose to use the Apple-designed controller. For many years, this was the only non-Adobe-designed PostScript controller on the market. The first generation of competitive PostScript printers from other companies all used the rejected Adobe controller and were substantially faster as a result.
The LaserWriter cost $7,000, too much for a printer that would be available to only a single microcomputer. Jobs, who still didn’t think that workers needed umbilical cords to their companies, saw the logic in at least having an umbilical cord to the LaserWriter, and so AppleTalk was born. AppleTalk was clever software that worked with the Zilog chip that controlled the Macintosh serial port, turning it into a medium-speed network connection. AppleTalk allowed up to thirty-two Macs to share a single LaserWriter.
At the same time that he was ordering AppleTalk, Jobs still didn’t understand the need to link computers together to share information. This antinetwork bias, which was based on his concept of the lone computist -- a digital Clint Eastwood character who, like Jobs, thought he needed nobody else -- persisted even years later when the NeXT computer system was introduced in 1988. Though the NeXT had built-in Ethernet networking, Jobs was still insisting that the proper use of his computer was to transfer data on a removable disk. He felt so strongly about this that for the first year, he refused orders for NeXT computers that were specifically configured to store data for other computers on the network. That would have been an impure use of his machine.
Adobe Systems rode fonts and printer software to more than $100 million in annual sales. By the time they reach that sales level, most software companies are being run by marketers rather than by programmers. The only two exceptions to this rule that I know of are Microsoft and Adobe -- companies that are more alike than their founders would like to believe.
Both Microsoft and Adobe think they are following the organizational model devised by Bob Taylor at Xerox PARC. But where Microsoft has a balkanized version of the Taylor model, acquired second-hand through Charles Simonyi, Warnock and Geschke got their inspiration directly from the master himself. Adobe is the closest a commercial software company can come to following Taylor’s organizational model and still make a profit.
The problem, of course, is that Bob Taylor’s model isn’t a very good one for making products or profits -- it was never intended to be -- and Adobe has been able to do both only through extraordinary acts of will.
As it was at PARC, what matters at Adobe is technology, not marketing. The people who matter are programmers, not marketers. Ideologically correct technology is more important than making money—a philosophy that clearly differentiates Adobe from Microsoft, where making money is the prime directive.
John Warnock looks at Microsoft and sees only shoddy technology. Bill Gates looks at Adobe and sees PostScript monks who are ignoring the real world -- the world controlled by Bill Gates. And it’s true; the people of Adobe see PostScript as a religion and hate Gates because he doesn’t buy into that religion.
There is a part of John Warnock that would like to have the same fatherly relationship with Bill Gates that he already has with Steve Jobs. But their values are too far apart, and, unlike Steve, Bill already has a father.
Being technologically correct is more important to Adobe than pleasing customers. In fact, pleasing customers is relatively unimportant. Early in 1985, for example, representatives from Apple came to ask Adobe’s help in making the Macintosh’s bitmapped fonts print faster. These were programmers from Adobe’s largest customer who had swallowed their pride to ask for help. Adobe said, “No.”
“They wanted to dump screens [to the printer] faster, and they wanted Apple-specific features added to the printer,” Warnock explained to me years later. “Apple came to me and said, ‘We want you to extend PostScript in a way that is proprietary to Apple.’ I had to say no. What they asked would have destroyed the value of the PostScript standard in the long term.”
If a customer that represented 75 percent of my income asked me to walk their dog, wash their car, teach their kids to read, or to help find a faster way to print bit-mapped fonts, I’d do it, even if it meant adding a couple of proprietary features to PostScript, which already had lots of proprietary features -- proprietary to Adobe.
The scene with Apple was quickly forgotten, because putting bad experiences out of mind is the Adobe way. Adobe is like a family that pretends grandpa isn’t an alcoholic. Unlike Microsoft, with its screaming and willingness to occasionally ship schlock code, all that matters at Adobe is great technology and the appearance of calm.
A Stanford M.B.A. was hired to work as Adobe’s first evangelist, trying to get independent software developers to write PostScript applications. Technical evangelism usually means going on the road -- making contacts, distributing information, pushing the product. Adobe’s evangelist went more than a year without leaving the building on business. He spent his days up in the lab, playing with the programmers. His definition of evangelism was waiting for potential developers to call him, if they knew he existed at all. What’s amazing about this story is that this nonevangelist came under no criticism for his behavior. Nobody said a thing.
Nobody said anything, either, when a technical support worker occasionally appeared at work wearing a skirt. Nobody said, “Interesting skirt, Glenn.” Nobody said anything.
Some folks from Adobe came to visit InfoWorld one afternoon, and I asked about Display PostScript, a product that had been developed to bring PostScript fonts and graphics to Macintosh screens. Display PostScript had been licensed to Aldus for a new version of its PageMaker desktop publishing program called PageMaker Pro. But at the last minute, after the product was finished and the deal with Aldus was signed, Adobe decided that it didn’t want to do Display PostScript for the Macintosh after all. They took the product back, and scrambled hard to get Aldus to cancel PageMaker Pro, too. I wanted to know why they withdrew the product.
The product marketing manager for PostScript, the person whose sole function was to think about how to get people to buy more PostScript, claimed to have never heard of Display PostScript for the Mac or of PageMaker Pro. He looked bewildered.
“That was before you joined the company,” explained Steve MacDonald, an Adobe vice-president who was leading the group. “You don’t tell new marketing people the history of their own products?” I asked, incredulous. “Or is it just the mistakes you don’t tell them about?”
MacDonald shrugged.
For all its apparent disdain for money, Adobe has an incredible ability to wring the stuff out of customers. In 1989, for example, every Adobe programmer, marketing executive, receptionist, and shipping clerk represented $357,000 in sales and $142,000 in profit. Adobe has the highest profit margins and the greatest sales per employee of any major computer hardware or software company, but such performance comes at a cost. Under the continual prodding of the company’s first chairman, a venture capitalist named Q. T. Wiles, Adobe worked hard to maximize earnings per share, which boosted the stock price. Warnock and Geschke, who didn’t know any better, did as Q. T. told them to.
Q. T. is gone now, his Adobe shares sold, but the company is trapped by its own profitability. Earnings per share are supposed to only rise at successful companies. If you earned a dollar per share last year, you had better earn $1.20 per share this year. But Adobe, where 400 people are responsible for more than $150 million in sales, was stretched thin from the start. The only way that the company could keep its earnings going ever upward was to get more work out of the same employees, which means that the couple of dozen programmers who work most of the technical miracles are under terrific pressure to produce.
This pressure to produce first became a problem when Warnock decided to do Adobe Illustrator, a PostScript drawing program for the Macintosh. Adobe’s customers to that point were companies like Apple and IBM, but Illustrator was meant to be sold to you and me, which meant that Adobe suddenly needed distributors, dealers, printers for manuals, duplicators for floppy disks—things that weren’t at all necessary when serving customers meant sending a reel of computer tape over to Cupertino in exchange for a few million dollars, thank you. But John Warnock wanted the world to have a PostScript drawing tool, and so the world would have a PostScript drawing tool. A brilliant programmer named Mike Schuster was pulled away from the company’s system software business to write the application as Warnock envisioned it.
In the retail software business, you introduce a product and then immediately start doing revisions to stay current with technology and fix bugs. John Warnock didn’t know this. Adobe Illustrator appeared in 1986, and Schuster was sent to work on other things. They should have kept someone working on Illustrator, improving it and fixing bugs, but there just wasn’t enough spare programmer power to allow that. A version of Illustrator for the IBM PC followed that was so bad it came to be called the “landfill version” inside the company. PC Illustrator should have been revised instantly, but wasn’t.
When Adobe finally got around to sprucing up the Macintosh version of Illustrator, they cleverly called the new version Illustrator 88, because it appeared in 1988. You could still buy Illustrator 88 in 1989, though. And in 1990. And even into 1991, when it was finally replaced by Illustrator 3.0. Adobe is not a marketing company.
In 1988, Bill Gates asked John Warnock for PostScript code and fonts to be included with the next version of Windows. With Adobe’s help users would be able to see the same beautiful printing on-screen that they could print on a PostScript printer. Gates, who never pays for anything if he can avoid it, wanted the code for free. He argued that giving PostScript code to Microsoft would lead to a dramatic increase in Adobe’s business selling fonts, and Adobe would benefit overall. Warnock said, “No.”
In September 1989, Apple Computer and Microsoft announced a strategic alliance against Adobe. As far as both companies were concerned, John Warnock had said “No” twice too often. Apple was giving Microsoft its software for building fonts in exchange for use of a PostScript clone that Microsoft had bought from a developer named Cal Bauer.
Forty million Apple dollars were going to Adobe each year, and clever Apple programmers, who still remembered being rejected by Adobe in 1985, were arguing that it would be cheaper to roll their own printing technology than to continue buying Adobe’s.
In mid-April 1989, news had reached Adobe that Apple would soon announce the phasing out of PostScript in favor of its own code, to be included in the upcoming release of new Macintosh control software called System 7.0. A way had to be found fast to counter Apple’s strategy or change it.
Only a few weeks after learning of Apple’s decision -- and before anything had been announced by Apple or Microsoft -- Adobe announced Adobe Type Manager, or ATM, software that would bring Adobe fonts directly to Macintosh screens without Apple’s assistance, since it would be sold directly to users. ATM, which would work only with fonts -- with words rather than pictures -- was replacing Display PostScript, which Adobe had already tried (and failed) to sell to Apple. ATM had the advantage over Apple’s System 7.0 software that it would work with older Macintoshes. Adobe’s underlying hope was that quick market acceptance of ATM would dissuade Apple from even setting out on its separate course.
But Apple made its announcement anyway, sold all its Adobe shares, and joined forces with Microsoft to destroy its former ally. Adobe’s threat to both Apple and Microsoft was so great that the two companies conveniently ignored their own yearlong court battle over the vestiges of an earlier agreement allowing Microsoft to use the look and feel of Apple’s Macintosh computer in Microsoft Windows.
Apple-Microsoft and Apple-Adobe are examples of strategic alliances as they are conducted in the personal computer industry. Like bears mating or teenage romances, strategic alliances are important but fleeting.
Apple chose to be associated with Adobe only as long as the relationship worked to Apple’s advantage. No sticking with old friends through thick and thin here.
For Microsoft, fonts and printing technology had been of little interest, since Gates saw as important what happened inside the box, not inside the printer. Then IBM decided it wanted the same fonts in both its computers and printers, only to discover that Microsoft, its traditional software development partner, had no font technology to offer. So IBM began working with Adobe and listening to the ideas of John Warnock.
If IBM is God in the PC universe then Bill Gates is the pope. Warnock, now talking directly with IBM, was both a heretic and a threat to Gates. Warnock claimed that Gates was not a good servant of God, that Microsoft’s technology was inferior. Worse, Warnock was correct, and Gates knew it. Control of the universe in the box was at stake.
Warnock and Adobe had to die, Gates decided, and if it took an unholy alliance with Apple and a temporary putting aside of legal conflicts between Microsoft and Apple to kill Adobe, then so be it.
This passion play of Adobe, Apple, and Microsoft could have taken place between companies in many industries, but what sets the personal computer industry apart is that the products in question -- Adobe Type Manager and Apple’s System 7.0 -- did not even exist.
Battles of midsized cars or two-ply toilet tissue take place on showroom floors and supermarket shelves, but in the personal computer industry, deals are cut and share prices fluctuate on the supposed attributes of products that have yet to be written or even fully designed. Apple’s offensive against Adobe was based on revealing the ongoing development of software that users could not expect to purchase for at least a year (two years, it turned out); Adobe’s response was a program that would take months to develop.
ATM was announced, then developed, essentially by a single programmer who used to joke with the Adobe marketing manager about whether the product or its introduction would be done first.
Both companies were dueling with intentions, backed up by the conviction of some computer hacker that given enough time and junk food, he could eventually write software that looked pretty much like what had just been announced with such fanfare.
As I said, computer graphics software is very hard to do well. By the middle of 1991, Apple and Adobe had made friends again, in part because Microsoft had not been able to fulfill its part of the deal with Apple. “Our entry into the printer software business has not succeeded,” Bill Gates wrote in a memo to his top managers. “Offering a cheap PostScript clone turned out to not only be very hard but completely irrelevant to helping our other problems. We overestimated the threat of Adobe as a competitor and ended up making them an ‘enemy,’ while we hurt our relationship with Hewlett-Packard …”
Overestimated the threat of Adobe as a competitor? In a way it’s true, because the computer world is moving on to other issues, leaving Adobe behind. Adobe makes more money than ever in its PostScript backwater, but is not wresting the operating system business from Microsoft, as both companies had expected.
With its reliance on only a few very good programmers, Adobe was forced to defend its existing businesses at the cost of its future. John Warnock is still a better programmer than Bill Gates, but he’ll never be as savvy.
Reprinted with permission
Photo Credit: NinaMalyna/Shutterstock
Sixteenth in a series. Robert X. Cringely's tome Accidental Empires takes on a startlingly prescient tone in this next installment. Remember as you read that the book was published in 1991. Much of what he writes here about Apple cofounder Steve Jobs is remarkably insightful in hindsight. Some portions foreshadow the future -- or one possible outcome -- when looking at Apple following Jobs' ouster in 1985 and the company now, following his death.
The most dangerous man in Silicon Valley sits alone on many weekday mornings, drinking coffee at Il Fornaio, an Italian restaurant on Cowper Street in Palo Alto. He’s not the richest guy around or the smartest, but under a haircut that looks as if someone put a bowl on his head and trimmed around the edges, Steve Jobs holds an idea that keeps some grown men and women of the Valley awake at night. Unlike these insomniacs, Jobs isn’t in this business for the money, and that’s what makes him dangerous.
I wish, sometimes, that I could say this personal computer stuff is just a matter of hard-headed business, but that would in no way account for the phenomenon of Steve Jobs. Co-founder of Apple Computer and founder of NeXT Inc., Jobs has literally forced the personal computer industry to follow his direction for fifteen years, a direction based not on business or intellectual principles but on a combination of technical vision and ego gratification in which both business and technical acumen played only small parts.
Steve Jobs sees the personal computer as his tool for changing the world. I know that sounds a lot like Bill Gates, but it’s really very different. Gates sees the personal computer as a tool for transferring every stray dollar, deutsche mark, and kopeck in the world into his pocket. Gates doesn’t really give a damn how people interact with their computers as long as they pay up. Jobs gives a damn. He wants to tell the world how to compute, to set the style for computing.
Bill Gates has no style; Steve Jobs has nothing but style.
A friend once suggested that Gates switch to Armani suits from his regular plaid shirt and Levi’s Dockers look. “I can’t do that,” Bill replied. “Steve Jobs wears Armani suits.”
Think of Bill Gates as the emir of Kuwait and Steve Jobs as Saddam Hussein.
Like the emir, Gates wants to run his particular subculture with an iron hand, dispensing flawed justice as he sees fit and generally keeping the bucks flowing in, not out. Jobs wants to control the world. He doesn’t care about maintaining a strategic advantage; he wants to attack, to bring death to the infidels. We’re talking rivers of blood here. We’re talking martyrs. Jobs doesn’t care if there are a dozen companies or a hundred companies opposing him. He doesn’t care what the odds are against success. Like Saddam, he doesn’t even care how much his losses are. Nor does he even have to win, if, by losing the mother of all battles he can maintain his peculiar form of conviction, still stand before an adoring crowd of nerds, symbolically firing his 9 mm automatic into the air, telling the victors that they are still full of shit.
You guessed it. By the usual standards of Silicon Valley CEOs, where job satisfaction is measured in dollars, and an opulent retirement by age 40 is the goal, Steve Jobs is crazy.
Apple Computer was always different. The company tried hard from the beginning to shake the hobbyist image, replacing it with the idea that the Apple II was an appliance but not just any appliance; it was the next great appliance, a Cuisinart for the mind. Apple had the five-color logo and the first celebrity spokesperson: Dick Cavett, the thinking person’s talk show host.
Alone among the microcomputer makers of the 1970s, the people of Apple saw themselves as not just making boxes or making money; they thought of themselves as changing the world.
Atari wasn’t changing the world; it was in the entertainment business. Commodore wasn’t changing the world; it was just trying to escape from the falling profit margins of the calculator market while running a stock scam along the way. Radio Shack wasn’t changing the world; it was just trying to find a new consumer wave to ride, following the end of the CB radio boom. Even IBM, which already controlled the world, had no aspirations to change it, just to wrest some extra money from a small part of the world that it had previously ignored.
In contrast to the hardscrabble start-ups that were trying to eke out a living selling to hobbyists and experimenters, Apple was appealing to doctors, lawyers, and middle managers in large corporations by advertising on radio and in full pages of Scientific American. Apple took a heroic approach to selling the personal computer and, by doing so, taught all the others how it should be done.
They were heroes, those Apple folk, and saw themselves that way. They were more than a computer company. In fact, to figure out what was going on in the upper echelons in those Apple II days, think of it not as a computer company at all but as an episode of “Bonanza.”
(Theme music, please.)
Riding straight off the Ponderosa’s high country range every Sunday night at nine was Ben Cartwright, the wise and supportive father, who was willing to wield his immense power if needed. At Apple, the part of Ben was played by Mike Markkula.
Adam Cartwright, the eldest and best-educated son, who was sophisticated, cynical, and bossy, was played by Mike Scott. Hoss Cartwright, a good-natured guy who was capable of amazing feats of strength but only when pushed along by the others, was played by Steve Wozniak. Finally, Little Joe Cartwright, the baby of the family who was quick with his mouth, quick with his gun, but was never taken as seriously as he wanted to be by the rest of the family, was played by young Steve Jobs.
The series was stacked against Little Joe. Adam would always be older and more experienced. Hoss would always be stronger. Ben would always have the final word. Coming from this environment, it was hard for a Little Joe character to grow in his own right, short of waiting for the others to die. Steve Jobs didn’t like to wait.
By the late 1970s, Apple was scattered across a dozen one- and two-story buildings just off the freeway in Cupertino, California. The company had grown to the point where, for the first time, employees didn’t all know each other on sight. Maybe that kid in the KOME T-shirt who was poring over the main circuit board of Apple’s next computer was a new engineer, a manufacturing guy, a marketer, or maybe he wasn’t any of those things and had just wandered in for a look around. It had happened before. Worse, maybe he was a spy for the other guys, which at that time didn’t mean IBM or Compaq but more likely meant the start-up down the street that was furiously working on its own microcomputer, which its designers were sure would soon make the world forget that there ever was a company called Apple.
Facing these realities of growth and competition, the grownups at Apple -- Mike Markkula, chairman, and Mike Scott, president -- decided that ID badges were in order. The badges included a name and an individual employee number, the latter based on the order in which workers joined the company. Steve Wozniak was declared employee number 1, Steve Jobs was number 2, and so on.
Jobs didn’t want to be employee number 2. He didn’t want to be second in anything. Jobs argued that he, rather than Woz, should have the sacred number 1 since they were co-founders of the company and J came before W in the alphabet. It was a kid’s argument, but then Jobs, who was still in his early twenties, was a kid. When that plan was rejected, he argued that the number 0 was still unassigned, and since 0 came before 1, Jobs would be happy to take that number. He got it.
Steve Wozniak deserved to be considered Apple’s number 1 employee. From a technical standpoint, Woz literally was Apple Computer. He designed the Apple II and wrote most of its system software and its first BASIC interpreter. With the exception of the computer’s switching power supply and molded plastic case, literally every other major component in the Apple II was a product of Wozniak’s mind and hand.
And in many ways, Woz was even Apple’s conscience. When the company was up and running and it became evident that some early employees had been treated more fairly than others in the distribution of stock, it was Wozniak who played the peacemaker, selling cheaply 80,000 of his own Apple shares to employees who felt cheated and even to those who just wanted to make money at Woz’s expense.
Steve Jobs’s roles in the development of the Apple II were those of purchasing agent, technical gadfly, and supersalesman. He nagged Woz into a brilliant design performance and then took Woz’s box to the world, where through sheer force of will, this kid with long hair and a scraggly beard imprinted his enthusiasm for the Apple II on thousands of would-be users met at computer shows. But for all Jobs did to sell the world on the idea of buying a microcomputer, the Apple II would always be Wozniak’s machine, a fact that might have galled employee number 0, had he allowed it to. But with the huckster’s eternal optimism, Jobs was always looking ahead to the next technical advance, the next computer, determined that that machine would be all his.
Jobs finally got the chance to overtake his friend when Woz was hurt in the February 1981 crash of his Beechcraft Bonanza after an engine failure taking off from the Scotts Valley airport. With facial injuries and a case of temporary amnesia, Woz was away from Apple for more than two years, during which he returned to Berkeley to finish his undergraduate degree and produced two rock festivals that lost a combined total of nearly $25 million, proving that not everything Steve Wozniak touched turned to gold.
Another break for Jobs came two months after Woz’s airplane crash, when Mike Scott was forced out as Apple president, a victim of his own ruthless drive that had built Apple into a $300 million company. Scott was dogmatic. He did stupid things like issuing edicts against holding conversations in aisles or while standing. Scott was brusque and demanding with employees (“Are you working your ass off?” he’d ask, glaring over an office cubicle partition). And when Apple had its first-ever round of layoffs, Scott handled them brutally, pushing so hard to keep momentum going that he denied the company a chance to mourn its loss of innocence.
Scott was a kind of clumsy parent who tried hard, sometimes too hard, and often did the wrong things for the right reasons. He was not well suited to lead the $1 billion company that Apple would soon be.
Scott had carefully thwarted the ambitions of Steve Jobs. Although Jobs owned 10 percent of Apple, outside of purchasing (where Scott still insisted on signing the purchase orders, even if Jobs negotiated the terms), he had little authority.
Mike Markkula fired Scott, sending the ex-president into a months-long depression. And it was Markkula who took over as president when Scott left, while Jobs slid into Markkula’s old job as chairman. Markkula, who’d already retired once before, from Intel, didn’t really want the president’s job and in fact had been trying to remove himself from day-to-day management responsibility at Apple. As a president with retirement plans, Markkula was easier-going than Scott had been and looked much more kindly on Jobs, whom he viewed as a son.
Every high-tech company needs a technical visionary, someone who has a clear idea about the future and is willing to do whatever it takes to push the rest of the operation in that direction. In the earliest days of Apple, Woz was the technical visionary along with doing nearly everything else. His job was to see the potential product that could be built from a pile of computer chips. But that was back when the world was simpler and the paradigm was to bring to the desktop something that emulated a mainframe computer terminal. After 1981, Woz was gone, and it was time for someone else to take the visionary role. The only people inside Apple who really wanted that role were Jef Raskin and Steve Jobs.
Raskin was an iconoclastic engineer who first came to Apple to produce user manuals for the Apple II. His vision of the future was a very basic computer that would sell for around $600 -- a computer so easy to use that it would require no written instructions, no user training, and no product support from Apple. The new machine would be as easy and intuitive to use as a toaster and would be sold at places like Sears and K-Mart. Raskin called his computer Macintosh.
Jobs’s ambition was much grander. He wanted to lead the development of a radical and complex new computer system that featured a graphical user interface and mouse (Raskin preferred keyboards). Jobs’s vision was code-named Lisa.
Depending on who was talking and who was listening, Lisa was either an acronym for “large integrated software architecture,” or for “local integrated software architecture,” or the name of a daughter born to Steve Jobs and Nancy Rogers in May 1978. Jobs, the self-centered adoptee who couldn’t stand competition from a baby, at first denied that he was Lisa’s father, sending mother and baby for a time onto the Santa Clara County welfare rolls. But blood tests said otherwise, and years later, Jobs and Lisa, now a teenager, are often seen rollerblading on the streets of Palo Alto. Jobs and Rogers never married.
Lisa, the computer, was born after Jobs toured Xerox PARC in December 1979, seeing for the first time what Bob Taylor’s crew at the Computer Science Lab had been able to do with bitmapped video displays, graphical user interfaces, and mice. “Why aren’t you marketing this stuff?” Jobs asked in wonderment as the Alto and other systems were put through their paces for him by a PARC scientist named Larry Tesler. Good question.
Steve Jobs saw the future that day at PARC and decided that if Xerox wouldn’t make that future happen, then he would. Within days, Jobs presented to Markkula his vision of Lisa, which included a 16-bit microprocessor, a bit-mapped display, a mouse for controlling the on-screen cursor, and a keyboard that was separate from the main computer box. In other words, it was a Xerox Alto, minus the Alto’s built-in networking. “Why would anyone need an umbilical cord to his company?” Jobs asked.
Lisa was a vision that made the as-yet-unconceived IBM PC look primitive in comparison. And though he didn’t know it at the time, it was also a development job far bigger than Steve Jobs could even imagine.
One of the many things that Steve Jobs didn’t know in those days was Cringely’s Second Law, which I figured out one afternoon with the assistance of a calculator and a six-pack of Heineken. Cringely’s Second Law states that in computers, ease of use with equivalent performance varies with the square root of the cost of development. This means that to design a computer that’s ten times easier to use than the Apple II, as the Lisa was intended to be, would cost 100 times as much money. Since it cost around $500,000 to develop the Apple II, Cringely’s Second Law says the cost of building the Lisa should have been around $50 million. It was.
Let’s pause the history for a moment and consider the implications of this law for the next generation of computers. There was no significant difference in ease of use between Lisa and its follow-on, the Macintosh. So if you’ve been sitting on your hands waiting to buy a computer that is ten times as easy to use as the Macintosh, remember that it’s going to cost around $5 billion (1982 dollars, too) to develop. Apple’s R&D budget is about $500 million, so don’t expect that computer to come from Cupertino. IBM’s R&D budget is about $3 billion, but that’s spread across many lines of computers, so don’t expect your ideal machine to come from Big Blue either. The only place such a computer is going to come from, in fact, is a collaboration of computer and semiconductor companies. That’s why the computer world is suddenly talking about Open Systems, because building hardware and software that plug and play across the product lines and R&D budgets of a hundred companies is the only way that future is going to be born. Such collaboration, starting now, will be the trend in the next century, so put your wallet away for now.
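To make the arithmetic explicit, here is a compact restatement of Cringely’s Second Law and the two calculations above; the symbols E (ease of use at equivalent performance) and C (development cost) are my own shorthand, not Cringely’s:

\[
E \propto \sqrt{C} \quad\Longleftrightarrow\quad C \propto E^{2}
\]
\[
C_{\text{Lisa}} \approx 10^{2} \times C_{\text{Apple II}} \approx 100 \times \$500{,}000 = \$50\text{ million}
\]
\[
C_{\text{next}} \approx 10^{2} \times C_{\text{Lisa}} \approx 100 \times \$50\text{ million} = \$5\text{ billion}
\]

Here "next" stands for the hypothetical machine ten times easier to use than the Lisa or Macintosh: each tenfold improvement in ease of use multiplies the development bill by a hundred, which is exactly the jump from the Apple II’s roughly $500,000 to Lisa’s roughly $50 million, and again to the $5 billion machine nobody can afford to build alone.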
Meanwhile, back in Cupertino, Mike Markkula knew from his days working in finance at Intel just how expensive a big project could become. That’s why he chose John Couch, a software professional with a track record at Hewlett-Packard, to head the super-secret Lisa project. Jobs was crushed by losing the chance to head the realization of his own dream.
Couch was yet another Adam Cartwright, and Jobs hated him.
The new ideas embodied in Lisa would have been Jobs’s way of breaking free from his typecasting as Little Joe. He would become, instead, the prophet of a new kind of computing, taking his power from the ideas themselves and selling this new type of computing to Apple and to the rest of the world. And Apple accepted both his dream and the radical philosophy behind it, which said that technical leadership was as important as making money, but Markkula still wouldn’t let him lead the project.
Vision, you’ll recall, is the ability to see potential in the work of others. The jump from having vision to being a visionary, though, is a big one. The visionary is a person who has both the vision and the willingness to put everything on the line, including his or her career, to further that vision. There aren’t many real visionaries in this business, but Steve Jobs is one. Jobs became the perfect visionary, buying so deeply into the vision that he became one with it. If you were loyal to Steve, you embraced his vision. If you did not embrace his vision, you were either an enemy or brain-dead.
So Chairman Jobs assigned himself to Raskin’s Macintosh group, pushed the other man aside, and converted the Mac into what was really a smaller, cheaper Lisa. As the holder of the original Lisa vision, Jobs ignorantly criticized the big-buck approach being taken by Couch and Larry Tesler, who had by then joined Apple from Xerox PARC to head Lisa software development. Lisa was going to be too big, too slow, too expensive, Jobs argued. He bet Couch $5,000 that Macintosh would hit the market first. He lost.
The early engineers were nearly all gone from Apple by the time Lisa development began. The days when the company ran strictly on adrenalin and good ideas were fading. No longer did the whole company meet to put computers in boxes so they could ship enough units by the end of the month. With the introduction of the Apple III in 1980, life had become much more businesslike at Apple, which suddenly had two product lines to sell.
It was still the norm, though, for technical people to lead each product development effort, building products that they wanted to play with themselves rather than products that customers wanted to buy. For example, there was Mystery House, Apple's own spreadsheet, intended to kill VisiCalc because everyone who worked on Apple II software decided en masse that they hated Terry Opdendyk, president of VisiCorp, and wanted to hurt him by destroying his most important product. There was no real business reason to do Mystery House, just spite. The spreadsheet was written by Woz and Randy Wigginton and never saw action under the Apple label because it was given up later as a bargaining chip in negotiations between Apple and Microsoft. Some Mystery House code lives on today in a Macintosh spreadsheet from Ashton-Tate called Full Impact.
But John Couch and his Lisa team were harbingers of a new professionalism at Apple. Apple had in Lisa a combination of the old spirit of Apple -- anarchy, change, new stuff, engineers working through the night coming up with great ideas -- and the introduction of the first nontechnical marketers, marketers with business degrees -- the "suits." These nontechnical marketers were, for the first time at Apple, the project coordinators, while the technical people were just members of the team. And rather than the traditional bunch of hackers from Homestead High, Lisa hardware was developed by a core of engineers hired away from Hewlett-Packard and DEC, while the software was developed mainly by ex-Xerox programmers, who were finally getting a chance to bring to market a version of what they'd worked on at Xerox PARC for most of the preceding ten years. Lisa was the most professional operation ever mounted at Apple -- far more professional than anything that has followed.
Lisa was ahead of its time. When most microcomputers came with a maximum of 64,000 characters of memory, the Lisa had 1 million characters. When most personal computers were capable of doing only one task at a time, Lisa could do several. The computer was so easy to use that customers were able to begin working within thirty minutes of opening the box. Setting up the system was so simple that early drafts of the directions used only pictures, no words. With its mouse, graphical user interface, and bit-mapped screen, Lisa was the realization of nearly every design feature invented at Xerox PARC except networking.
Lisa was professional all the way. Painstaking research went into every detail of the user interface, with arguments ranging up and down the division about what icons should look like, whether on-screen windows should just appear and disappear or whether they should zoom in and out. Unlike nearly every other computer in the world, Lisa had no special function keys to perform complex commands in a single keystroke, and offered no obscure ways to hold down three keys simultaneously and, by so doing, turn the whole document into Cyrillic, or check its spelling, or some other such nonsense.
To make it easy to use, Lisa followed PARC philosophy, which meant that no matter what program you were using, hitting the E key just put an E on-screen rather than sending the program into edit mode, or expert mode, or erase mode. Modes were evil. At PARC, you were either modeless or impure, and this attitude carried over to Lisa, where Larry Tesler's license plate read "NO MODES." Instead of modes, Lisa had a very simple keyboard that was used in conjunction with the mouse and onscreen menus to manipulate text and graphics without arcane commands.
Couch left nothing to chance. Even the problem of finding a compelling application for Lisa was covered; instead of waiting for a Dan Bricklin or a Mitch Kapor to introduce the application that would make corporate America line up to buy Lisas, Apple wrote its own software -- seven applications covering everything that users of microcomputers were then doing with their machines, including a powerful spreadsheet.
Still, when Lisa hit the market in 1983, it failed. The problem was its $10,000 price, which meant that Lisa wasn't really a personal computer at all but the first real workstation. Workstations can cost more than PCs because they are sold to companies rather than to individuals, but they have to be designed with companies in mind, and Lisa wasn't. Apple had left out that umbilical cord to the company that Steve Jobs had thought unnecessary. At $10,000, Lisa was being sold into the world of corporate mainframes, and Lisa's inability to communicate with those mainframes doomed it to failure.
Despite the fact that Lisa had been his own dream and Apple was his company, Steve Jobs was thrilled with Lisa's failure, since it would make the inevitable success of Macintosh all the more impressive.
Back in the Apple II and Apple III divisions, life still ran at a frenetic pace. Individual contributors made major decisions and worked on major programs alone or with a very few other people. There was little, if any, management, and Apple spent so much money, it was unbelievable. With Raskin out of the way, that's how Steve Jobs ran the Macintosh group too. The Macintosh was developed beneath a pirate flag. The lobby of the Macintosh building was lined with Ansel Adams prints, and Steve Jobs's BMW motorcycle was parked in a corner, an ever-present reminder of who was boss. It was a renegade operation and proud of it.
When Lisa was taken from him, Jobs went through a paradigm shift that combined his dreams for the Lisa with Raskin's idea of appliancelike simplicity and low cost. Jobs decided that the problem with Lisa was not that it lacked networking capability but that its high price doomed it to selling in a market that demanded networking. There'd be no such problem with Macintosh, which would do all that Lisa did but at a vastly lower price. Never mind that it was technically impossible.
Lisa was a big project, while Macintosh was much smaller because Jobs insisted on an organization small enough that he could dominate every member, bending each to his will. He built the Macintosh on the backs of Andy Hertzfeld, who wrote the system software, and Burrell Smith, who designed the hardware. All three men left their idiosyncratic fingerprints all over the machine. Hertzfeld gave the Macintosh an elegant user interface and terrific look and feel, mainly copied from Lisa. He also made Macintosh very, very difficult to write programs for. Smith was Jobs's ideal engineer because he'd come up from the Apple II service department ("I made him," Jobs would say). Smith built a clever little box that was incredibly sophisticated and nearly impossible to manufacture.
Jobs's vision imposed so many restraints on the Macintosh that it's a wonder it worked at all. In contrast to Lisa, with its million characters of memory, Raskin wanted Macintosh to have only 64,000 characters -- a target that Jobs continued to aim for until long past the time when it became clear to everyone else that the machine needed more memory. Eventually, he "allowed" the machine to grow to 128,000 characters, though even with that amount of memory, the original 128K Macintosh still came to fit people's expectations that mechanical things don't work. Apple engineers, knowing that further memory expansion was inevitable, built in the capability to expand the 128K machine to 512K, though they couldn't tell Jobs what they had done because he would have made them change it back.
Markkula gave up the presidency of Apple at about the time Lisa was introduced. As chairman, Jobs went looking for a new president, and his first choice was Don Estridge of IBM, who turned the job down. Jobs's second choice was John Sculley, who came over from PepsiCo for the same package that Estridge had rejected. Sculley was going to be as much Jobs's creation as Burrell Smith had been. It was clear to the Apple technical staff that Sculley knew nothing at all about computers or the computer business. They dismissed him, and nobody even noticed when Sculley was practically invisible during his first months at Apple. They thought of him as Jobs's lapdog, and that's what he was.
With Mike Markkula again in semiretirement, concentrating on his family and his jet charter business, there was no adult supervision in place at Apple, and Jobs ran amok. With total power, the willful kid who'd always resented the fact that he had been adopted created at Apple a metafamily in which he played the domineering, disrespectful, demanding type of father that he imagined must have abandoned him those many years ago.
Here's how Steve-As-Dad interpreted Management By Walking Around. Coming up to an Apple employee, he'd say, “I think Jim [another employee] is shit. What do you think?”
If the employee agreed that Jim was shit, Jobs went to the next person and said, “Bob and I think Jim is shit. What do you think?”
If the first employee disagreed and said that Jim was not shit, Jobs would move on to the next person, saying, “Bob and I think Jim is great. What do you think?”
Public degradation played an important role too. When Jobs finally succeeded in destroying the Lisa division, he spoke to the assembled workers who were about to be reassigned or laid off. “I see only B and C players here,” he told the stunned assemblage. “All the A players work for me in the Macintosh division. I might be interested in hiring two or three of you [out of 300]. Don’t you wish you knew which ones I’ll choose?”
Jobs was so full of himself that he began to believe his own PR, repeating as gospel stories about him that had been invented to help sell computers. At one point a marketer named Dan’l Lewin stood up to him, saying, “Steve, we wrote this stuff about you. We made it up.”
Somehow, for all the abuse he handed out, nobody attacked Jobs in the corridor with a fire axe. I would have. Hardly anyone stood up to him. Hardly anyone quit. Like the Bhagwan, driving around Rancho Rajneesh each day in another Rolls-Royce, Jobs kept his troops fascinated and productive. The joke going around said that Jobs had a “reality distortion field” surrounding him. He’d say something, and the kids in the Macintosh division would find themselves replying, “Drink poison Kool-Aid? Yeah, that makes sense.”
Steve Jobs gave impossible tasks, never acknowledging that they were impossible. And, as often happens with totalitarian rulers, most of his impossible demands were somehow accomplished, though at a terrible cost in ruined careers and failed marriages.
Beyond pure narcissism, which was there in abundance, Jobs used these techniques to make sure he was surrounding himself with absolutely the best technical people. The best, nothing but the best, was all he would tolerate, which meant that there were crowds of less-than-godlike people who went continually up and down in Jobs’s estimation, depending on how much he needed them at that particular moment. It was crazy-making.
Here’s a secret to getting along with Steve Jobs: when he screams at you, scream back. Take no guff from him, and if he’s the one who is full of shit, tell him, preferably in front of a large group of amazed underlings. This technique works because it gets Jobs’s attention and fits in with his underlying belief that he probably is wrong but that the world just hasn’t figured that out yet. Make it clear to him that you, at least, know the truth.
Jobs had all kinds of ideas he kept throwing out. Projects would stop. Projects would start. Projects would get so far and then be abandoned. Projects would go on in secret, because the budget was so large that engineers could hide things they wanted to do, even though that project had been canceled or never approved. For example, Jobs thought at one point that he had killed the Apple III, but it went on anyhow.
Steve Jobs created chaos because he would get an idea, start a project, then change his mind two or three times, until people were doing a kind of random walk, continually scrapping and starting over. Apple was confusing suppliers and wasting huge amounts of money doing initial manufacturing steps on products that never appeared.
Despite the fact that Macintosh was developed with a much smaller team than Lisa and it took advantage of Lisa technology, the little computer that was supposed to have sold at K-Mart for $600 ended up costing just as much to bring to market as Lisa had. From $600, the price needed to make a MacProfit doubled and tripled until the Macintosh could no longer be imagined as a home computer. Two months before its introduction, Jobs declared the Mac to be a business computer, which justified the higher price.
Apple clearly wasn’t very disciplined. Jobs created some of that, and a lot of it was created by the fact that it didn’t matter to him whether things were organized. Apple people were rewarded for having great ideas and for making great technical contributions but not for saving money. Policies that looked as if they were aimed at saving money actually had other justifications. Apple people still share hotel rooms at trade shows and company meetings, for example, but that’s strictly intended to limit bed hopping, not to save money. Apple is a very sexy company, and Jobs wanted his people to lavish that libido on the products rather than on each other.
Oh, and Apple people were also rewarded for great graphics: brochures, ads, everything that represented Apple to its customers and dealers had to be absolutely top quality. In addition, the people who developed Apple’s system of dealers were rewarded because the company realized early on that this was its major strength against IBM.
A very dangerous thing happened with the introduction of the Macintosh. Jobs drove his development team into the ground, so when the Mac was introduced in 1984, there was no energy left, and the team coasted for six months and then fell apart. And during those six months, John Sculley was being told that there were development projects going on in the Macintosh group that weren’t happening. The Macintosh people were just burned out, the Lisa Division was destroyed and its people were not fully integrated into the Macintosh group, so there was no new blood.
It was a time when technical people should have been fixing the many problems that come with the first version of any complex high-tech product. But nobody moved quickly to fix the problems. They were just too tired.
The people who made the Macintosh produced a miracle, but that didn’t mean their code was wonderful. The software development tools to build applications like spreadsheets and word processors were not available for at least two years. Early Macintosh programs had to be written first on a Lisa and then recompiled to run on the Mac. None of this mattered to Jobs, who was in heaven, running Apple as his own private psychology experiment, using up people and throwing them away. Attrition, strangled marriages, and destroyed careers were unimportant, given the broader context of his vision.
The idea was to have a large company that somehow maintained a start-up philosophy, and Jobs thrived on it. He planned to develop a new generation of products every eighteen months, each one as radically different from the one before as the Macintosh had been from the Apple II. By 1990, nobody would even remember the Macintosh, with Apple four generations down the road. Nothing was sacred except the vision, and it became clear to him that the vision could best be served by having the people of Apple live and work in the same place. Jobs had Apple buy hundreds of acres in the Coyote Valley, south of San Jose, where he planned to be both employer and landlord for his workers, so they’d never ever have a reason to leave work.
Unchecked, Jobs was throwing hundreds of millions of dollars at his dream, and eventually the drain became so bad that Mike Markkula revived his Ben Cartwright role in June 1985. By this point Sculley had learned a thing or two in his lapdog role and felt ready to challenge Jobs. Again, Markkula decided against Jobs, this time backing Sculley in a boardroom battle that led to Jobs’s being banished to what he called “Siberia”— Bandley 6, an Apple building with only one office. It was an office for Steve Jobs, who no longer had any official duties at the company he had founded in his parents’ garage. Jobs left the company soon after.
Here’s what was happening at Apple in the early 1980s that Wall Street analysts didn’t know. For its first five years in business, Apple did not have a budget. Nobody really knew how much money was coming in or going out or what the company was buying. In the earliest days, this wasn’t a problem because a company that was being run by characters who not long before had made $3 per hour dressing up as figures from Alice in Wonderland at a local shopping mall just wasn’t inclined toward extravagance. Later, it seemed that the money was coming in so fast that there was no way it could all be spent. In fact, when the first company budget happened in 1982, the explanation was that Apple finally had enough people and projects where they could actually spend all the money they made if they didn’t watch it. But even when they got a budget, Apple’s budgeting process was still a joke. All budgets were done at the same time, so rather than having product plans from which support plans and service plans would flow -- a logical plan based on products that were coming out -- everybody all at once just said what they wanted. Nothing was coordinated.
It really wasn’t until 1985 that there was any logical way of making the budget, where the product people would say what products would come out that year, and then the marketing people would say what they were going to do to market these products, and the support people would say how much it was going to cost to support the products.
It took Sculley at least six months, maybe a year, from the time he deposed Jobs to understand how out of control things were. It was total anarchy. Sculley’s major budget gains in the second half of 1985 came from laying off 20 percent of the work force -- 1,200 people -- and forcing managers to make sense of the number of suppliers they had and the spare parts they had on hand. Apple had millions of dollars of spare parts that were never going to be used, and many of these were sold as surplus. Sculley instituted some very minor changes in 1986 -- reducing the number of suppliers and beginning to simplify the peripherals line so that Macintosh printers, for example, would also work with the Apple II, Apple III, and Lisa.
The large profits that Sculley was able to generate during this period came entirely from improved budgeting and from simply cancelling all the whacko projects started by Steve Jobs. Sculley was no miracle worker.
Who was this guy Sculley? Raised in Bermuda, scion of an old-line, old-money family, he trained as an architect, then worked in marketing at PepsiCo for his entire career before joining Apple. A loner, he seemed to make corporate infighting his specialty at the soft drink maker, a habit he brought with him to Apple.
Sculley is not an easy man to be with. He is uneasy in public and doesn’t fit well with the casual hacker class that typified the Apple of Woz and Jobs. Spend any time with Sculley and you’ll notice his eyes, which are dark, deep-set, and hawklike, with white visible on both sides of the iris and above it when you look at him straight on. In traditional Japanese medicine, where facial features are used for diagnosis, Sculley’s eyes are called sanpaku and are attributed to an excess of yang. It’s a condition that Japanese doctors associate with people who are prone to violence.
With Jobs gone, Apple needed a new technical visionary. Sculley tried out for the role, and supported people like Bill Atkinson, Larry Tesler, and Jean-Louis Gassee as visionaries, too. He tried to send a message to the troops that everything would be okay, and that wonderful new products would continue to come out, except in many ways they didn’t.
Sculley and the others were surrogate visionaries compared to Jobs. Sculley’s particular surrogate vision was called Knowledge Navigator, mapped out in an expensive video and in his book, Odyssey. It was a goal, but not a product, deliberately set in the far future. Jobs would have set out a vision that he intended his group actually to accomplish. Sculley didn’t do that because he had no real goal.
By rejecting Steve Jobs’s concept of continuous revolution but not offering a specific alternative program in its place, Sculley was left with only the status quo. He saw his job as milking as much money as possible out of the current Macintosh technology and allowing the future to take care of itself. He couldn’t envision later generations of products, and so there would be none. Today the Macintosh is a much more powerful machine, but it still has an operating system that does only one thing at a time. It’s the same old stuff, only faster.
And along the way, Apple abandoned the $1-billion-per-year Apple II business. Steve Jobs had wanted the Apple II to die because it wasn’t his vision. Then Jean-Louis Gassee came in from Apple France and used his background in minicomputers to claim that there really wasn’t a home market for personal computers. Earth to Jean-Louis! Earth to Jean-Louis! So Apple ignored the Macintosh home market to develop the Macintosh business market, and all the while, the company’s market share continued to drop.
Sculley didn’t have a clue about which way to go. And like Markkula, he faded in and out of the business, residing in his distant tower for months at a time while the latest group of subordinates would take their shot at running the company. Sculley is a smart guy but an incredibly bad judge of people, and this failing came to permeate Apple under his leadership.
Sculley falls in love with people and gives them more power than they can handle. He chose Gassee to run Apple USA and the phony-baloney Frenchman caused terrific damage during his tenure. Gassee correctly perceived that engineers like to work on hot products, but he made the mistake of defining “hot” as “high end,” dooming Apple’s efforts in the home and small business markets.
Gassee’s organization was filled with meek sycophants. In his staff meetings, Jean-Louis talked, and everyone else listened. There was no healthy discussion, none of the wild and crazy brainstorming that Apple had been known for and that had produced the company’s most innovative programs. It was like one of Stalin’s staff meetings.
Another early Sculley favorite was Allen Loren, who came to Apple as head of management information systems -- the chief administrative computer guy -- and then suddenly found himself in charge of sales and marketing simply because Sculley liked him. Loren was a good MIS guy but a bad marketing and sales guy.
Loren presided over Apple’s single greatest disaster, the price increase of 1988. In an industry built around the concept of prices’ continually dropping, Loren decided to raise prices on October 1, 1988, in an effort to shore up Apple’s sinking profit margins. By raising prices Loren was fighting a force of nature, like asking the earth to reverse its direction of rotation, the tides to stop, mothers everywhere to stop telling their sons to get haircuts. Ignorantly, he asked the impossible, and the bottom dropped out of Apple’s market. Sales tumbled, market share tumbled. Any momentum that Apple had was lost, maybe for years, and Sculley allowed that to happen.
Loren was followed as vice-president of marketing by David Hancock, who was known throughout Apple as a blowhard. When Apple marketing should have been trying to recover from Loren’s pricing mistake, the department did little under Hancock. The marketing department was instead distracted by nine reorganizations in less than two years. People were so busy covering their asses that they weren’t working, so Apple’s business in 1989 and 1990 showed what happens when there is no marketing at all.
The whole marketing operation at Apple is now run by former salespeople, a dangerous trend. Marketing is the creation of long-term demand, while sales is execution of marketing strategies. Marketing is buying the land, choosing what crop to grow, planting the crop, fertilizing it, and then deciding when to harvest. Sales is harvesting the crop. Salespeople in general don’t think strategically about the business, and it’s this short-term focus that’s prevalent right now at Apple.
When Apple introduced its family of lower-cost Macintoshes in the fall of 1990, marketing was totally unprepared for their popularity. The computer press had been calling for lower-priced Macs, but nobody inside Apple expected to sell a lot of the boxes. Blame this on the lack of marketing, and also blame it on the demise, two years before, of Apple’s entire market research department, which fell in another political game. When the Macintosh Classic, LC, and IIsi appeared, their overwhelming popularity surprised, pleased, but then dismayed Apple, which was still staffing up as a company that sold expensive computers. Profit margins dropped despite an 85 percent increase in sales, and Sculley found himself having to lay off 15 percent of Apple’s work force, because of unexpected success that should have been, could have been, planned for.
Sculley’s current favorite is Fred Forsythe, formerly head of manufacturing but now head of engineering, with major responsibility for research and development. Like Loren, Forsythe was good at the job he was originally hired to do, but that does not at all mean he’s the right man for the R&D job. Nor is Sculley, who has taken to calling himself Apple’s Chief Technical Officer -- an insult to the company’s real engineers.
So why does Sculley make these terrible personnel moves? Maybe he wants to make sure that people in positions of power are loyal to him, as all these characters are. And by putting them in jobs they are not really up to doing, they are kept so busy that there is no time or opportunity to plot against Sculley. It’s a stupid reason, I know, and one that has cost Apple billions of dollars, but it’s the only one that makes any sense.
With all the ebb and flow of people into and out of top management positions at Apple, it reached the point where it was hard to get qualified people even to accept top positions, since they knew they were likely to be fired. That’s when Sculley started offering signing bonuses. Joe Graziano, who’d left Apple to be the chief financial officer at Sun Microsystems, was lured back with a $1.5 million bonus in 1990. Shareholders and Apple employees who weren’t raking in such big rewards complained about the bonuses, but the truth is that it was the only way Sculley could get good people to work for him. (Other large sums are often counted in “Graz” units. A million and a half dollars is now known as “1 Graz” -- a large unit of currency in Applespeak.)
The rest of the company was as confused as its leadership. Somehow, early on, reorganizations -- “reorgs” -- became part of the Apple culture. They happen every three to six months and come from Apple’s basic lack of understanding that people need stability in order to be able to work together.
Reorganizations have become so much of a staple at Apple that employees categorize them into two types. There’s the “Flint Center reorganization,” which is so comprehensive that Apple calls its Cupertino workers into the Flint Center auditorium at DeAnza College to hear the top executives explain it. And there’s the smaller “lunchroom reorganization,” where Apple managers call a few departments into a company cafeteria to hear the news.
The problem with reorgs is that they seem to happen overnight, and many times they are handled by groups being demolished and people being told to go to Human Resources and find a new job at Apple. And so the sense is at Apple that if you don’t like where you are, don’t worry, because three to six months from now everything is going to be different. At the same time, though, the continual reorganizations mean that nobody has long-term responsibility for anything. Make a bad decision? Who cares! By the time the bad news arrives, you’ll be gone and someone else will have to handle the problems.
If you do like your job at Apple, watch it, because unless you are in some backwater that no one cares about and is severely understaffed, your job may be gone in a second, and you may be “on the street,” with one or two months to find a job at Apple.
Today, the sense of anomie -- alienation, disconnectedness -- at Apple is pervasive. The difference between the old Apple, which was crazy, and the new Apple is anomie. People are alienated. Apple still gets the bright young people. They come into Apple, and instead of getting all fired up about something, they go through one or two reorgs and get disoriented. I don’t hear people who are really happy to be at Apple anymore. They wonder why they are there, because they’ve had two bosses in six months, and their job has changed twice. It’s easy to mix up groups and end up not knowing anyone. That’s a real problem.
“I don’t know what will happen with Apple in the long term,” said Larry Tesler. “It all depends on what they do.”
They? Don’t you mean we, Larry? Has it reached the point where an Apple vice-president no longer feels connected to his own company?
With the company in a constant state of reorganization, there is little sense of an enduring commitment to strategy at Apple. It’s just not in the culture. Surprisingly, the company has a commitment to doing good products; it’s the follow-through that suffers. Apple specializes in flashy product introductions but then finds itself wandering away in a few weeks or months toward yet another pivotal strategy and then another.
Compare this with Microsoft, which is just the opposite, doing terrific implementation of mediocre products. For example, in the area of multimedia computing -- the hot new product classification that integrates computer text, graphics, sound, and full-motion video -- Microsoft’s Multimedia Windows product is ho-hum technology acquired from a variety of sources and not very well integrated, but the company has implemented it very well. Microsoft does a good roll-out, offers good developer support, and has the same people leading the operation for years and years. They follow the philosophy that as long as you are the market leader and are still throwing technology out there, you won’t be dislodged.
Microsoft is taking the Japanese approach of not caring how long or how much money it takes to get multimedia right. They’ve been at it for six years so far, and if it takes another six years, so be it. That’s what makes me believe Microsoft will continue to be a factor in multimedia, no matter how bad its products are.
In contrast to Microsoft, Apple has a very elegant multimedia architecture called QuickTime, which does for time-based media what Apple’s QuickDraw did for graphics. QuickTime has tools for integrating video, animation, and sound into Macintosh programs. It automatically synchronizes sound and images and provides controls for playing, stopping, and editing video sequences. QuickTime includes technology for compressing images so they require far less memory for storage. In short, QuickTime beats the shit out of Microsoft’s Multimedia Extensions for Windows, but Apple is also taking a typical short-term view. Apple produced a flashy intro, but has no sense of enduring commitment to its own strategy.
The good and the bad that was Apple all came from Steve Jobs, who in 1985 was once again an orphan and went off to found another company -- NeXT Inc. -- and take another crack at playing the father role. Steve sold his Apple stock in a huff (and at a stupidly low price), determined to do it all over again -- to build another major computer company -- and to do it his way.
“Steve never knew his parents,” recalled Trip Hawkins, who went to Apple as manager of market planning in 1979. “He makes so much noise in life, he cries so loud about everything, that I keep thinking he feels that if he just cries loud enough, his real parents will hear and know that they made a mistake giving him up.”
Fourteenth in a series. We resume Robert X. Cringely's serialization of his 1991 tech-industry classic Accidental Empires after a short respite during a period of rapid-fire news.
This installment reveals much about copying -- a hot topic in lawsuits today -- and about how copyrights and patents apply to software, and why for a long time patents didn't.
Mitch Kapor, the father of Lotus 1-2-3, showed up one day at my house but wouldn’t come inside. “You have a cat in there, don’t you?” he asked.
Not one cat but two, I confessed. I am a sinner.
Mitch is allergic to cats. I mean really allergic, with an industrial-strength asthmatic reaction. “It’s only happened a couple of times”, he explained, “but both times I thought I was going to die”.
People have said they are dying to see me, but Kapor really means it.
At this point we were still standing in the front yard, next to Kapor’s blue rental car. The guy had just flown cross-country in a Canadair Challenger business jet that costs $3,000 per hour to run, and he was driving a $28.95-per-day compact from Avis. I would have at least popped for a T-Bird.
We were still standing in the front yard because Mitch Kapor needed to use the bathroom, and his mind was churning out a risk/reward calculation, deciding whether to chance contact with the fierce Lisa and Jeri, our kitty sisters.
“They are generally sleeping on the clean laundry about this time”, I assured him.
He decided to take a chance and go for it.
“You won’t regret it”, I called after him.
Actually, I think Mitch Kapor has quite a few regrets. Success has placed a heavy burden on Mitch Kapor.
Mitch is a guy who was in the right place at the right time and saw clearly what had to be done to get very, very rich in record time. Sure enough, the Brooklyn-born former grad student, recreational drug user, disc jockey, Transcendental Meditation teacher, mental ward counselor, and so-so computer programmer today has a $6 million house on 22 acres in Brookline, Massachusetts, the $12 million jet, and probably the world’s foremost collection of vintage Hawaiian shirts. So why isn’t he happy?
I think Mitch Kapor isn’t happy because he feels like an imposter.
This imposter thing is a big problem for America, with effects that go far beyond Mitch Kapor. Imposters are people who feel that they haven’t earned their success, haven’t paid their dues -- that it was all too easy. It isn’t enough to be smart, we’re taught. We have to be smart, and hard working, and long suffering. We’re supposed to be aggressive and successful, but our success is not supposed to come at the expense of anyone else. Impossible, right?
We got away from this idea for a while in the 1980s, when Michael Milken and Donald Trump made it okay to be successful on brains and balls alone, but look what’s happened to them. The tide has turned against the easy bucks, even if those bucks are the product of high intelligence craftily applied, as in the case of Kapor and most of the other computer millionaires. We’re in a resurgence of what I call the guilt system, which can be traced back through our educational institutions all the way to the medieval guild system.
The guild system, with its apprentices, journeymen, and masters, was designed from the start to screen out people, not encourage them. It took six years of apprenticeship to become a journeyman blacksmith. Should it really take six years for a reasonably intelligent person to learn how to forge iron? Of course not. The long apprenticeship period was designed to keep newcomers out of the trade while at the same time rewarding those at the top of the profession by giving them a stream of young helpers who worked practically for free.
This concept of dues paying and restraint of trade continues in our education system today, where the route to a degree is typically cluttered with requirements and restrictions that have little or nothing to do with what it was we came to study. We grant instant celebrity to the New Kids on the Block but support an educational system that takes an average of eight years to issue each Ph.D.
The trick is to not put up with the bullshit of the guild system. That’s what Bill Gates did, or he would have stayed at Harvard and become a near-great mathematician. That’s what Kapor did, too, in coming up with 1-2-3, but now he’s lost his nerve and is paying an emotional price. Doe-eyed Mitch Kapor has scruples, and he’s needlessly suffering for them.
We’re all imposters in a way -- I sure am -- but poor Mitch feels guilty about it. He knows that it’s not brilliance, just cleverness, that’s the foundation of his fortune. What’s wrong with that? He knows that timing and good luck played a much larger part in the success of 1-2-3 than did technical innovation. He knows that without Dan Bricklin and VisiCalc, 1-2-3 and the Kapor house and the Kapor jet and the Kapor shirt collection would never have happened.
“Relax and enjoy it”, I say, but Mitch Kapor won’t relax. Instead, he crisscrosses the country in his jet, trying to convince himself and the world that 1-2-3 was not a fluke and that he can do it all again. He’s also trying to convince universities that they ought to promote a new career path called software designer, which is the name he has devised for his proto-technical function. A software designer is a smart person who thinks a lot about software but isn’t a very good programmer. If Kapor is successful in this educational campaign, his career path will be legitimized and be made guilt free but at the cost of others having to pay dues, not knowing that they shouldn’t really have to.
“Good artists copy”, said Pablo Picasso. “Great artists steal”.
I like this quotation for a lot of reasons, but mainly I like it because the person who told it to me was Steve Jobs, co-founder of Apple Computer, virtual inventor of the personal computer business as it exists today, and a dyed-in-the-wool sociopath. Sometimes it takes a guy like Steve to tell things like they really are. And the way things really are in the computer business is that there is a whole lot of copying going on. The truly great ideas are sucked up quickly by competitors, and then spit back on the market in new products that are basically the old products with slight variations added to improve performance and keep within the bounds of legality. Sometimes the difference between one computer or software program and the next seems like the difference between positions 63 and 64 in the Kama Sutra, where 64 is the same as 63 but with pinkies extended.
The reason for this copying is that there just aren’t very many really great ideas in the computer business -- ideas good enough and sweeping enough to build entire new market segments around. Large or small, computers all work pretty much the same way -- not much room for earth-shaking changes there. On the software side, there are programs that simulate physical systems, or programs that manipulate numbers (spreadsheets), text and graphics (word processors and drawing programs), or raw data (databases). And that’s about the extent of our genius so far in horizontal applications -- programs expected to appeal to nearly every computer user.
These apparent limits on the range of creativity mean that Dan Bricklin invented the first spreadsheet, but you and I didn’t, and we never can. Despite our massive intelligence and good looks, the best that we can hope to do is invent the next spreadsheet or maybe the best spreadsheet, at least until our product, too, is surpassed. With rare exceptions, what computer software and hardware engineers are doing every day is reinventing things. Reinventing isn’t easy, either, but it can still be very profitable.
The key to profitable reinvention lies in understanding the relationship between computer hardware and software. We know that computers have to exist before programmers will write software specifically for them. We also know that people usually buy computers to run a single compelling software application. Now we add in longevity -- the fact that computers die young but software lives on, nearly forever. It’s always been this way. Books crumble over time, but the words contained in those books -- the software -- survive as long as readers are still buying and publishers are still printing new editions. Computers don’t crumble -- in fact, they don’t even wear out -- but the physical boxes are made obsolete by newer generations of hardware long before the programs and data inside have lost their value.
What software does lose in the transition from one hardware generation to the next is an intimate relationship with that hardware. Writing VisiCalc for the Apple II, Bob Frankston had the Apple hardware clearly in mind at all times and optimized his work to run on that machine by writing in assembly language -- the internal language of the Apple II’s MOS Technology 6502 microprocessor -- rather than in some higher-level language like BASIC or FORTRAN. When VisiCalc was later translated to run on other types of computers, it lost some of that early intimacy, and performance suffered.
But even if intimacy is lost, software hangs on because it is so hard to produce and so expensive to change.
Moore’s Law says that the number of transistors that can be built on a given area of silicon doubles every eighteen months, which means that a new generation of faster computer hardware appears every eighteen months too. Cringely’s Law (I just thought this up) says that people who actually rely on computers in their work won’t tolerate being more than one hardware generation behind the leading edge. So everyone who can afford to buys a new computer when their present computer is three years old. But do all these users get totally new software every time they buy a new computer to run it on? Not usually, because the training costs of learning to use a new application are often higher than the cost of the new computer itself.
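If you want to see the arithmetic behind those two laws, here is a minimal sketch in Python. It simply restates the claims above in code; the transistor count for the 8088 is only a rough figure, and none of this comes from the book itself.

    # A rough illustration of the two laws above: transistor budgets double
    # every 18 months, and working users replace a 3-year-old machine, so
    # each new computer offers roughly four times the transistors of the
    # one it replaces. The 8088 figure is approximate.

    def transistors(start_count, months, doubling_period=18.0):
        """Transistor budget after `months`, doubling every `doubling_period` months."""
        return start_count * 2 ** (months / doubling_period)

    base = 29_000  # roughly the Intel 8088 in the original IBM PC
    for years in (0, 1.5, 3.0, 4.5):
        print(f"after {years} years: ~{transistors(base, years * 12):,.0f} transistors")

    # Three years is two doubling periods, so the replacement machine has
    # 2 ** 2 = 4 times the transistor budget of the one being retired.
    print("replacement ratio after 3 years:", 2 ** (36 / 18.0))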
Once the accounting firm Ernst & Young, with its 30,000 personal computers, standardizes on an application, it takes an act of God or the IRS to change software.
Software is more complex than hardware, though most of us don’t see it that way. It seems as if it should be harder to build computers, with their hundreds or thousands of electrical connections, than to write software, where it’s a matter of just saying to the program that a connection exists, right? But that isn’t so. After all, it’s easier to print books than it is to write them.
Try typing on a computer keyboard. What’s happening in there that makes the letters appear on the screen? Type the words “Cringely’s mom wears army boots” while running a spreadsheet program, then using a word processor, then a different word processor, then a database. The internal workings of each program will handle the words differently -- sometimes radically differently -- from the others, yet all run on the same hardware and all yield the same army boots.
Woz designed and built the Apple I all by himself in a couple of months of spare time. Even the prototype IBM PC was slapped together by half a dozen engineers in less than thirty days. Software is harder because it takes the hardware only as a starting point and can branch off in one or many directions, each involving levels of complexity far beyond that of the original machine that just happens to hold the program. Computers are house-scaled, while software is building-scaled.
The more complex an application is, the longer it will stay in use. It shouldn’t be that way, but it is. By the time a program grows to a million lines of code, it’s too complex to change because no one person can understand it all. That’s why there are mainframe computer programs still running that are more than 30 years old.
In software, there are lots of different ways of solving the same problem. VisiCalc, the original spreadsheet, came up with the idea of cells that had row and column addresses. Right from the start, the screen was filled with these empty cells, and without the cells and their addresses, no work could be done. The second spreadsheet program to come along was called T/Maker and was written by Peter Roizen. T/Maker did not use cells at all and started with a blank screen. If you wanted to total three rows of numbers in T/Maker, you put three plus signs down the left-hand side of the screen as you entered the numbers and then put an equal sign at the bottom to indicate that was the place to show a total. T/Maker also included the ability to put blocks of text in the spreadsheet, and it could even run text vertically as well as horizontally. VisiCalc had nothing like that.
A later spreadsheet, called Framework and written by Robert Carr, replaced cells with what Carr called frames. There were different kinds of frames in Framework, with different properties -- like row-oriented frames and column-oriented frames, for example. Put some row-oriented frames inside a single column-oriented frame, and you had a spreadsheet. That spreadsheet could then be put as a nested layer inside another spreadsheet also built of frames. Mix and match your frames differently, and you had a database or a word processor, all without a cell in sight.
If VisiCalc was an apple, then T/Maker was an orange, and Framework was a rutabaga, yet all three programs could run on identical hardware, and all could produce similar output although through very different means. That’s what I mean by software being more complex than hardware.
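To make the apple, orange, and rutabaga point concrete, here is a toy sketch in Python -- nothing like the real programs' code -- showing two completely different internal models, one built on addressed cells and one built on T/Maker-style markup, producing the same total.

    # An illustrative sketch only: two toy "spreadsheets" with very different
    # internals that produce the same answer, echoing the VisiCalc-versus-
    # T/Maker contrast described above.

    # Model 1: VisiCalc-style -- a grid of addressed cells.
    cells = {"A1": 120, "A2": 45, "A3": 35}
    grid_total = sum(cells[addr] for addr in ("A1", "A2", "A3"))

    # Model 2: T/Maker-style -- a blank page where '+' marks a number to add
    # and '=' marks where the total should appear.
    page = [
        "+ 120",
        "+  45",
        "+  35",
        "=",
    ]
    text_total = sum(int(line[1:]) for line in page if line.startswith("+"))

    assert grid_total == text_total == 200
    print("both models agree:", grid_total)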
Having gone through the agony of developing an application or operating system, then, software developers have a great incentive to greet the next generation of hardware by translating the present software -- "porting" it -- to the new environment rather than starting over and developing a whole new version that takes complete advantage of the new hardware features.
It’s at this intersection of old software and new hardware that the opportunity exists for new applications to take command of the market, offering extra features, combined with higher performance made possible by the fact that the new program was written from scratch for the new computer. This is one of the reasons that WordStar, which once ruled the market for CP/M word processing programs, is only a minor player in today’s MS-DOS world, eclipsed by WordPerfect, a word processing package that was originally designed to run on Data General minicomputers but was completely rewritten for the IBM PC platform.
In both hardware and software, successful reinvention takes place along the edges of established markets. It’s usually not enough just to make another computer or program like all the others; the new product has to be superior in at least one respect. Reinvented products have to be cheaper, or more powerful, or smaller, or have more features than the more established products with which they are intended to compete. These are all examples of edges. Offer a product that is in no way cheaper, faster, or more versatile -- that skirts no edges -- and buyers will see no reason to switch from the current best-seller.
Even the IBM PC skirted the edges by offering both a 16-bit processor and the IBM nameplate, which were two clear points of differentiation.
Once IBM’s Personal Computer was established as the top-selling microcomputer in America, it not only followed a market edge, it created one. Small, quick-moving companies saw that they had a few months to make enduring places for themselves purely by being the first to build hardware and software add-ons for the IBM PC. The most ambitious of these companies bet their futures on IBM’s success. A hardware company from Cleveland called Tecmar Inc. camped staffers overnight on the doorstep of the Sears Business Center in Chicago to buy the first two IBM PCs ever sold. Within hours, the two PCs were back in Ohio, yielding up their technical secrets to Tecmar’s logic analyzers.
And on the software side, Lotus Development Corp. in Cambridge, Massachusetts, bet nearly $4 million on IBM and on the idea that Lotus 1-2-3 would become the compelling application that would sell the new PC. A spreadsheet program, 1-2-3 became the single most successful computer application of all.
Mitch Kapor had a vision, a moment of astounding insight when it became obvious to him how and why he should write a spreadsheet program like 1-2-3. Vision is a popular word in the computer business and one that has never been fully defined -- until now. Just what the heck does it mean to have such a vision?
George Bush called it the “vision thing.” Vision -- high-tech executives seem to bathe in it or at least want us to think that they do. They are “technical visionaries,” having their “technical visions” so often, and with such blinding insight, that it’s probably not safe for them to drive by themselves on the freeway. The truth is that technical vision is not such a big deal.
Dan Bricklin’s figuring out the spreadsheet, that’s a big deal, but it doesn’t fit the usual definition of technical vision, which is the ability to foresee potential in the work of others. Sure, some engineer working in the bowels of IBM may think he’s come up with something terrific, but it takes having his boss’s boss’s boss’s boss think so, too, and say so at some industry pow-wow before we’re into the territory of vision. Dan Bricklin’s inventing the spreadsheet was a bloody miracle, but Mitch Kapor’s squinting at the IBM PC and figuring out that it would soon be the dominant microcomputer hardware platform -- that’s vision.
There, the secret’s out: vision is only seeing neat stuff and recognizing its market potential. It’s reading in the newspaper that a new highway is going to be built and then quickly putting up a gas station or a fast food joint on what is now a stretch of country road but will soon be a freeway exit.
Most of the so-called visionaries don’t program and don’t design computers -- or at least they haven’t done so for many years. The advantages these people have are that they are listened to by others and, because they are listened to by others, all the real technical people who want the world to know about the neat stuff they are working on seek out these visionaries and give them demonstrations. Potential visions are popping out at these folks all the time. All they have to do is sort through the visions and apply some common sense.
Common sense told Mitch Kapor that IBM would succeed in the personal computer business but that even IBM would require a compelling application -- a spreadsheet written from scratch to take advantage of the PC platform -- to take off in the market. Kapor, who had a pretty fair idea of what was coming down the tube from most of the major software companies, was amazed that nobody seemed to be working on such a native-mode PC spreadsheet, leaving the field clear for him. Deciding to do 1-2-3 was a “no brainer”.
When IBM introduced its computer, there were already two spreadsheet programs that could run on it -- VisiCalc and Multiplan -- both ported from other platforms. Either program could have been the compelling application that IBM’s Don Estridge knew he would need to make the PC successful. But neither VisiCalc nor Multiplan had the performance, the oomph, required to kick IBM PC sales into second gear, though Estridge didn’t know that.
The PC sure looked successful. In the four months that it was available at the end of 1981, IBM sold about 50,000 personal computers, while Apple sold only 135,000 computers for the entire calendar year. By early 1982, the PC was outselling Apple two-to-one, primarily by attracting first-time buyers who were impressed by the IBM name rather than by a compelling application.
At the end of 1981, there were 2 million microcomputers in America. Today there are more than 45 million IBM-compatible PCs alone, with another 10 million to 12 million sold each year. It’s this latter level of success, where sales of 50,000 units would go almost unnoticed, that requires a compelling application. That application -- Lotus 1-2-3 -- didn’t appear until January 26, 1983.
Dan Bricklin made a big mistake when he didn’t try to get a patent on the spreadsheet. After several software patent cases had gone unsuccessfully as far as the U.S. Supreme Court, the general thinking when VisiCalc appeared in 1979 was that software could not be patented, only copyrighted. Like the words of a book, the individual characters of code could be protected by a copyright, and even the specific commands could be protected, but what couldn’t be protected by a copyright was the literal function performed by the program. There is no way that a copyright could protect the idea of a spreadsheet. Protecting the idea would have required a patent.
Ideas are strange stuff. Sure, you could draw up a better mousetrap and get a patent on that, as long as the Patent Office saw the trap design as “new, useful, and unobvious”. A spreadsheet, though, had no physical manifestation other than a particular rhythm of flashing electrons inside a microprocessor. It was that specific rhythm, rather than the actual spreadsheet function it performed, that could be covered by a copyright. Where the patent law seemed to give way was in its apparent failure to accept the idea of a spreadsheet as a virtual machine. VisiCalc was performing work there in the computer, just as a mechanical machine would. It was doing things that could have been accomplished, though far more laboriously, by cams, gears, and sprockets.
In fact, had Dan Bricklin drawn up an idea for a mechanical spreadsheet machine, it would have been patentable, and the patent would have protected not only that particular use for gears and sprockets but also the underlying idea of the spreadsheet. Such a patent would have even protected that idea as it might later be implemented in a computer program. That’s not what Dan Bricklin did, of course, because he was told that software couldn’t be patented. So he got a copyright instead, and the difference to Bricklin between one piece of legal paper and the other was only a matter of several hundred million dollars.
On May 26, 1981, after seven years of legal struggle, S. Pal Asija, a programmer and patent lawyer, received the first software patent for SwiftAnswer, a data retrieval program that was never heard from again and whose only historical function was to prove that all of the experts were wrong; software could be patented. Asija showed that when the Supreme Court had ruled against previous software patent efforts, it wasn’t saying that software was unpatentable but that those particular programs weren’t patentable. By then it was too late for Dan Bricklin. By the time VisiCalc appeared for the IBM PC, Bricklin and Frankston’s spreadsheet was already available for most of the top-selling microcomputers. The IBM PC version of VisiCalc was, in fact, a port of a port, having been translated from a version for the Radio Shack TRS-80 computer, which had been translated originally from the Apple II. VisiCalc was already two years old and a little tired. Here was the IBM PC, with up to 640K of memory available to hold programs and extra features, yet still VisiCalc ran in 64K, with the same old feature set you could get on an Apple II or on a “Trash-80”. It was no longer compelling to the new users coming into the market. They wanted something new.
Part of the reason VisiCalc was available on so many microcomputers was that Dan Fylstra’s company, which had been called Personal Software but by this time was called VisiCorp, wanted out of its contract with Dan Bricklin’s company, Software Arts. VisiCorp had outgrown Fylstra’s back bedroom in Massachusetts and was ensconced in fancier digs out in California, where the action was. But in the midst of all that Silicon Valley action, VisiCorp was hemorrhaging under its deal with Software Arts, which still paid Bricklin and Frankston a 37.5 percent royalty on each copy of VisiCalc sold. VisiCalc sales at one point reached a peak of 30,000 copies per month, and the agreement required VisiCorp to pay Software Arts nearly $12 million in 1983 alone -- far more than either side had ever expected.
Fylstra wanted a new deal that would cost his company less, but he had little power to force a change. A deal was a deal, and hackers like Bricklin and Frankston, whose professional lives were based on understanding and following the strict rules of programming, were not inclined to give up their advantage cheaply. The only leverage the contract gave VisiCorp, in fact, was its right to demand that Software Arts port VisiCalc to as many different computers as Fylstra liked. So Fylstra made Bricklin port VisiCalc to every microcomputer.
It was clear to both VisiCorp and Software Arts that the 37.5 percent royalty was too high. Today the usual royalty is around 15 percent. Fylstra wanted to own VisiCalc outright, but in two years of negotiations, the two sides never came to terms.
VisiCorp had published other products under the same onerous royalty schedule. One of those products was VisiPlot/VisiTrend, written by Mitch Kapor and Eric Rosenfield. VisiPlot/VisiTrend was an add-on to VisiCalc; it could import data from VisiCalc and other programs and then plot the data on graphs and apply statistical tests to determine trends from the data. It was a good program for stock market analysis.
VisiPlot/VisiTrend was derived from an earlier Kapor program written during one of his many stints of graduate work, this time at the Sloan School of Management at MIT. Kapor’s friend Rosenfield was doing his thesis in statistics using an econometric modeling language called TROLL. To help Rosenfield cut his bill for time on the MIT computer system, Kapor wrote a program he called Tiny TROLL, a microcomputer subset of TROLL. Tiny TROLL was later rewritten to read VisiCalc files, which turned the program into VisiPlot/VisiTrend.
VisiCorp, despite its excessive royalty schedule, was still the most successful microcomputer software company of its time. For its most successful companies, the software business is a license to print money. After the costs of writing applications are covered, profit margins run around 90 percent. VisiPlot/VisiTrend, for example, was a $249.95 product, which was sold to distributors for 60 percent off, or $99.98. Kapor’s royalty was 37.5 percent of that, or $37.49 per copy. VisiCorp kept $62.49, out of which the company paid for manufacturing the floppy disks and manuals (probably around $15) and marketing (perhaps $25), still leaving a profit of $22.49. Kapor and Rosenfield earned about $500,000 in royalties for VisiPlot/VisiTrend in 1981 and 1982, which was a lot of money for a product originally intended to save money on the Sloan School time-sharing system but less than a tenth of what Dan Bricklin and Bob Frankston were earning for VisiCalc, VisiCorp’s real cash cow. This earnings disparity was not lost on Mitch Kapor.
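For anyone who wants to check that arithmetic, here it is spelled out in a few lines of Python. The list price, the distributor discount, and the royalty rate come straight from the paragraph above; the $15 and $25 cost figures are, as noted there, only rough estimates.

    # The VisiPlot/VisiTrend money math, line by line.
    list_price = 249.95
    wholesale = round(list_price * (1 - 0.60), 2)      # sold to distributors at 60% off -> 99.98
    royalty = round(wholesale * 0.375, 2)               # Kapor's 37.5 percent cut -> 37.49
    gross_to_visicorp = round(wholesale - royalty, 2)   # what VisiCorp kept -> 62.49
    profit = round(gross_to_visicorp - 15 - 25, 2)      # after disks/manuals and marketing -> 22.49

    print(wholesale, royalty, gross_to_visicorp, profit)  # 99.98 37.49 62.49 22.49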
Kapor learned the software business at VisiCorp. He moved to California for five months to work for Fylstra as a product manager, helping to select and market new products. He saw what was both good and bad about the company and also saw the money that could be made with a compelling application like VisiCalc.
VisiCalc wasn’t the only program that VisiCorp wanted to buy outright in order to get out from under that 37.5 percent royalty. In 1982, Roy Folke, who worked for Fylstra, asked Kapor what it would take to buy VisiPlot/VisiTrend. Kapor first asked for $1 million -- that magic number in the minds of most programmers, since it’s what they always seem to ask for. Then Kapor thought again, realizing that there were other mouths to feed from this sale, other programmers who had helped write the code and deserved to be compensated. The final price was $1.2 million, which sent Mitch Kapor home to Massachusetts with $600,000 after taxes. Only three years before, he had been living in a room in Marv Goldschmitt’s house, wondering what to do with his life, and playing with an Apple II he’d hocked his stereo to buy.
Kapor saw the prototype IBM PC when he was working at VisiCorp. He had a sense that the PC and its PC-DOS operating system would set new standards, creating new edges of opportunity. Back in Boston, he took half his money -- $300,000 -- and bet it on this one-two punch of the IBM PC and PC-DOS. It was a gutsy move at the time because experts were divided about the prospects for success of both products. Some pundits saw real benefits to PC-DOS but nothing very special about IBM’s hardware.
Others thought IBM hardware would be successful, though probably with a more established operating system. Even IBM was hedging its bets by arranging for two other operating systems to support the PC -- CP/M-86 and the UCSD p-System. But the only operating system that shipped at the same time as the PC, and the only operating system that had IBM’s name on it, was PC-DOS. That wasn’t lost on Mitch Kapor either.
When riding the edges of technology, there is always a question of how close to the edge to be. By choosing to support only the IBM PC under PC-DOS, Kapor was riding damned close to the edge. If both the computer and its operating system took off, Kapor would be rich beyond anyone’s dreams. If either product failed to become a standard, 1-2-3 would fail; half his fortune and two years of Kapor’s life would have been wasted. Trying to minimize this same risk, other companies adopted more conservative paths. In San Diego, Context Management Systems, for example, was planning an integrated application far more ambitious than Lotus 1-2-3, but just in case IBM and PC-DOS didn’t make it, Context MBA was written under the UCSD p-System.
That lowercase p stands for pseudo. Developed at the University of California at San Diego, the p-System was an operating system intended to work on a wide variety of microprocessors by creating a pseudomachine inside the computer. Rather than writing a program to run on a specific computer like an IBM PC, the idea was to write for this pseudocomputer that existed only in computer memory and ran identically in a number of different computers. The pseudomachine had the same user interface and command set on every computer, whether it was a PC or even a mainframe. While the user programmed the pseudomachine, the pseudomachine programmed the underlying hardware. At least that was the idea.
The p-System gave the same look and feel to several otherwise dissimilar computers, though at the expense of the added pseudomachine translation layer, which made the p-System S-L-O-W -- slow but safe, to the minds of the programmers writing Context MBA, who were convinced that portability would give them a competitive edge. It didn’t.
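The pseudomachine idea is easier to see in miniature. Here is a toy stack-machine interpreter in Python, with an instruction set invented for the example rather than taken from real UCSD p-code; the point is only that the program targets a made-up machine while a small interpreter does the actual work -- which is also where the speed goes.

    # A toy pseudomachine (invented here, not actual p-code): programs target
    # a made-up instruction set, and an interpreter written for each real
    # computer runs them. The extra layer of interpretation is the price of
    # portability.

    def run(program):
        """Interpret a list of (opcode, argument) pairs on a tiny stack machine."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack.pop())
            else:
                raise ValueError(f"unknown opcode {op!r}")

    # The same "portable" program runs unchanged wherever the interpreter exists.
    run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])  # prints 5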
Context MBA had a giant spreadsheet, far more powerful than VisiCalc. The program also offered data management operations, graphics, and word processing, all within the big spreadsheet. Like Mitch Kapor and Lotus, Context had hopes for success beyond that of mere mortals.
Context MBA appeared six months before 1-2-3 and had more features than the Lotus product. For a while, this worried Kapor and his new partner, Jonathan Sachs, who even made some changes in 1-2-3 after looking at a copy of Context MBA. But their worries were unfounded because the painfully slow performance of Context MBA, with its extended spreadsheet metaphor and p-System overhead, killed both the product and the company. Lotus 1-2-3, on the other hand, was written from the start as a high-performance program optimized strictly for the IBM PC environment.
Sachs was the programmer for 1-2-3, while Kapor called himself the software designer. A software designer in the Mitch Kapor mold is someone who wears Hawaiian shirts and is intensely interested in the details of a program but not necessarily in the underlying algorithms or code. Kapor stopped being a programmer shortly after the time of Tiny TROLL. The roles of Kapor and Sachs in the development of 1-2-3 generally paralleled those of Dan Bricklin and Bob Frankston in the development of VisiCalc. The basis of 1-2-3 was a spreadsheet program for Data General minicomputers already written by Sachs, who had worked at Data General and before that at MIT. Kapor wanted to offer several functions in one program to make 1-2-3 stand out from its competitors, so they came up with the idea of adding graphics and a word processor to Sachs’s original spreadsheet. This way users could crunch their financial data, prepare graphs and diagrams illustrating the results, and package it all in a report prepared with the word processor. It was the word processor, which was being written by a third programmer, that became a bottleneck, holding up the whole project. Then Sachs played with an early copy of Context MBA and discovered that the word processing module of that product was responsible for much of its poor performance, so they decided to drop the word processor module in 1-2-3 and replace it with a simple database manager, which Sachs wrote, retaining the three modules needed to still call it 1-2-3, as planned.
Unlike Context MBA, Lotus 1-2-3 was written entirely in 8088 assembly language, which made it very fast. The program beat the shit out of Multiplan and VisiCalc when it appeared. (Bill Gates, ever unrealistic when it came to assessing the performance of his own products, predicted that Microsoft’s Multiplan would be the death of 1-2-3.) The Lotus product worked only on the PC platform, taking advantage of every part of the hardware. And though the first IBM PCs came with only 16K of onboard memory, 1-2-3 required 256K to run -- more than any other microcomputer program up to that time.
Given that Sachs was writing nearly all the 1-2-3 code under the nagging of Kapor, there has to be some question about where all the money was going. Beyond his own $300,000 investment, Kapor collected more than $3 million in venture capital -- nearly ten times the amount it took to bring the Apple II computer to market.
The money went mainly for creating an organization to sell 1-2-3 and for rolling out the product. Even in 1983, there were thousands of microcomputer software products vying for shelf space in computer stores. Kapor and a team of consultants from McKinsey & Co. decided to avoid competitors entirely by selling 1-2-3 directly to large corporations. They ignored computer stores and computer publications, advertising instead in Time and Newsweek. They spent more than $1 million on mass market advertising for the January 1983 roll-out. Their bold objective was to sell up to $4 million worth of 1-2-3 in the first year. For the sellers of a financial planning package, it must have been embarrassing to outstrip that first-year goal by 1,700 percent. In the first three months that 1-2-3 was on the market, IBM PC sales tripled. Big Blue had found its compelling application, and Mitch Kapor had found his gold mine.
Lotus sold $53 million worth of 1-2-3 in its first year. By 1984, the company had $157 million in sales and 700 employees. One of the McKinsey consultants, Jim Manzi, took over from Kapor that year as president, developing Lotus even further into a marketing-driven company centered around a sales force four times the size of Microsoft’s, selling direct to Fortune 1000 companies.
As Lotus grew and the thrill of the start-up turned into the drill of a major corporation, Kapor’s interests began to drift. To avoid the imposter label, Kapor felt that he had to follow spectacular success with spectacular success. If 1-2-3 was a big hit, just think how big the next product would be, and the next. A second product was brought out, Symphony, which added word processing and communications functions to 1-2-3. Despite $8 million in roll-out advertising, Symphony was not as big a success as 1-2-3. This had as much to do with the program’s “everything but the kitchen sink” total of 600 commands as it did with the $695 price. After Symphony, Lotus introduced Jazz, an integrated package for the Apple Macintosh that was a clear market failure. Lotus was still dependent on 1-2-3 for 80 percent of its royalties and Kapor was losing confidence.
Microsoft made a bid to buy Lotus in 1984. Bill Gates wanted that direct sales force, he wanted 1-2-3, and he wanted once again to be head of the largest microcomputer software company, since the spectacular growth of Lotus had stolen that distinction from Microsoft. Under the deal, Kapor would have become Microsoft’s third-largest stockholder.
“He seemed happy”, said Jon Shirley, who was then president of Microsoft. “We would have made him a ceremonial vice-chairman. Manzi was the one who didn’t like the plan”.
A merger agreement was reached in principle and then canceled when Manzi, who could see no role for himself in the technically oriented and strong-willed hierarchy of Microsoft, talked Kapor out of it.
Meanwhile, Software Arts and VisiCorp had beaten each other to a pulp in a flurry of lawsuits and countersuits. Meeting by accident on a flight to Atlanta in the spring of 1985, Kapor and Dan Bricklin made a deal to sell Software Arts to Lotus, after which VisiCalc was quickly put to death. Now there was no first spreadsheet, only the best one.
Four senior executives left Lotus in 1985, driven out by Manzi and his need to rebuild Lotus in his own image.
“I’m the nicest person I know”, said Manzi.
Then, in July 1986, finding that it was no longer easy and no longer fun, Mitch Kapor resigned suddenly as chairman of Lotus, the company that VisiCalc built.
Twelfth in a series. No look at the rise of the personal computing industry would be complete without a hard look at Bill Gates. Microsoft's cofounder set out to put a PC on every desktop, and pretty much succeeded. "How?" is the question.
Chapter 6 of Robert X. Cringely's 1991 classic Accidental Empires is fascinating reading in context of where Gates and Microsoft are today and what their success might foreshadow for companies leading the charge into the next computing era.
William H. Gates III stood in the checkout line at an all-night convenience store near his home in the Laurelhurst section of Seattle. It was about midnight, and he was holding a carton of butter pecan ice cream. The line inched forward, and eventually it was his turn to pay. He put some money on the counter, along with the ice cream, and then began to search his pockets.
“I’ve got a 50-cents-off coupon here somewhere”, he said, giving up on his pants pockets and moving up to search the pockets of his plaid shirt.
The clerk waited, the ice cream melted, the other customers, standing in line with their root beer Slurpees and six-packs of beer, fumed as Gates searched in vain for the coupon.
“Here”, said the next shopper in line, throwing down two quarters.
Gates took the money.
“Pay me back when you earn your first million”, the 7-11 philanthropist called as Gates and his ice cream faded into the night.
The shoppers just shook their heads. They all knew it was Bill Gates, who on that night in 1990 was approximately a three billion dollar man.
I figure there’s some real information in this story of Bill Gates and the ice cream. He took the money. What kind of person is this? What kind of person wouldn’t dig out his own 50 cents and pay for the ice cream? A person who didn’t have the money? Bill Gates has the money. A starving person? Bill Gates has never starved. Some paranoid schizophrenics would have taken the money (some wouldn’t, too), but I’ve heard no claims that Bill Gates is mentally ill. And a kid might take the money -- some bright but poorly socialized kid under, say, the age of 9.
Bingo.
My mother lives in Bentonville, Arkansas, a little town in the northwest part of the state, hard by the four corners of Arkansas, Kansas, Missouri, and Oklahoma. Bentonville holds the headquarters of Wal-Mart stores and is the home of Sam Walton, who founded Wal-Mart. We care about this because Sam Walton is maybe the only person in America who could just write a check and buy out Bill Gates and because my mother keeps running into Sam Walton in the bank.
Sam Walton will act as our control billionaire in this study.
Sam Walton started poor, running a Ben Franklin store in Newport, Arkansas, just after the war. He still drives a pickup truck today and has made his money selling one post hole digger, one fifty-pound bag of dog food, one cheap polyester shirt at a time, but the fact that he’s worth billions of dollars still gives him a lot in common with Bill Gates. Both are smart businessmen, both are highly competitive, both dominate their industries, both have been fairly careful with their money. But Sam Walton is old, and Bill Gates is young. Sam Walton has bone cancer and looks a little shorter on each visit to the bank, while Bill Gates is pouring money into biotechnology companies, looking for eternal youth. Sam Walton has promised his fortune to support education in Arkansas, and Bill Gates’s representatives tell fund raisers from Seattle charities that their boss is still “too young to be a pillar of his community”.
They’re right. He is too young.
Our fifteen-minutes-of-fame culture makes us all too quickly pin labels of good or bad on public figures. Books like this one paint their major characters in black or white, and sometimes in red. It’s hard to make such generalizations, though, about Bill Gates, who is not really a bad person. In many ways he’s not a particularly good person either. What he is is a young person, and that was originally by coincidence, but now it’s by design. At 36, Gates has gone from being the youngest person to be a self-made billionaire to being the self-made billionaire who acts the youngest.
Spend a late afternoon sitting at any shopping mall. Better still, spend a day at a suburban high school. Watch the white kids and listen to what they say. It’s a shallow world they live in -- one that’s dominated by school and popular culture and by yearning for the opposite sex. Saddam Hussein doesn’t matter unless his name is the answer to a question on next period’s social studies quiz. Music matters. Clothes matter, unless deliberately stating that they don’t matter is part of your particular style. Going to the prom matters. And zits -- zits matter a lot.
Watch these kids and remember when we were that age and everything was so wonderful and horrible and hormones ruled our lives. It’s another culture they live in -- another planet even -- one that we help them to create. On the white kids’ planet, all that is supposed to matter is getting good grades, going to the prom, and getting into the right college. There are no taxes; there is no further responsibility. Steal a car, get caught, and your name doesn’t even make it into the newspaper, because you are a juvenile, a citizen of the white kids’ planet, where even grand theft auto is a two-dimensional act.
Pay attention now, because here comes the important part.
William H. Gates III, who is not a bad person, is two-dimensional too. Girls, cars, and intense competition in a technology business are his life. Buying shirts, taking regular showers, getting married and being a father, becoming a pillar of his community, and just plain making an effort to get along with other people when he doesn’t feel like it are not parts of his life. Those parts belong to someone else -- to his adult alter ego. Those parts still belong to his father, William H. Gates II.
In the days before Microsoft, back when Gates was a nerdy Harvard freshman and devoting himself to playing high-stakes poker on a more-or-less full-time basis, his nickname was Trey -- the gambler’s term for a three of any suit. Trey, as in William H. Gates the Trey. His very identity then, as now, was defined in terms of his father. And remember that a trey, while a low card, still beats a deuce.
Young Bill Gates is incredibly competitive because he has a terrific need to win. Give him an advantage, and he’ll take it. Allow him an advantage, and he’ll still take it. Lend him 50 cents and, well, you know …. Those who think he cheats to win are generally wrong. What’s right is that Gates doesn’t mind winning ungracefully. A win is still a win.
It’s clear that if Bill Gates thinks he can’t win, he won’t play. This was true at Harvard, where he considered a career in mathematics until it became clear that there were better undergraduate mathematicians in Cambridge than Bill Gates. And that was true at home in Seattle, where his father, a successful corporate attorney and local big shot, still sets the standard for parenthood, civic responsibility, and adulthood in general.
“There are aspects of his life he’s defaulting on, like being a father”, said the dad, lobbing a backhand in this battle of generations that will probably be played to the death.
So young Bill, opting out of the adulthood contest for now, has devoted his life to pressing his every advantage in a business where his father has no presence and no particular experience. That’s where the odds are on the son’s side and where he’s created a supportive environment with other people much like himself, an environment that allows him to play the stern daddy role and where he will never ever have to grow old.
Bill Gates’s first programming experience came in 1968 at Seattle’s posh Lakeside School when the Mothers’ Club bought the school access to a time-sharing system. That summer, 12-year-old Bill and his friend Paul Allen, who was two years older, made $4,200 writing an academic scheduling program for the school. An undocumented feature of the program made sure the two boys shared classes with the prettiest girls. Later computing adventures for the two included simulating the northwest power grid for the Bonneville Power Administration, which did not know at the time that it was dealing with teenagers, and developing a traffic logging system for the city of Bellevue, Washington.
“Mom, tell them how it worked before”, whined young Bill, seeking his mother’s support in front of prospective clients for Traf-O-Data after the program bombed during an early sales demonstration.
By his senior year in high school, Gates was employed full time as a programmer for TRW -- the only time he has ever had a boss.
Here’s the snapshot view of Bill Gates’s private life. He lives in a big house in Laurelhurst, with an even bigger house under construction nearby. The most important woman in his life is his mother, Mary, a gregarious Junior League type who helps run her son’s life through yellow Post-it notes left throughout his home. Like a younger Hugh Hefner, or perhaps like an emperor of China trapped within the Forbidden City, Gates is not even held responsible for his own personal appearance. When Chairman Bill appears in public with unwashed hair and unkempt clothing, his keepers in Microsoft corporate PR know that they, not Bill, will soon be getting a complaining call from the ever-watchful Mary Gates.
The second most important woman in Bill Gates’s life is probably his housekeeper, with whom he communicates mainly through a personal graphical user interface -- a large white board that sits in Gates’s bedroom. Through check boxes, fill in the blanks, and various icons, Bill can communicate his need for dinner at 8 or for a new pair of socks (brown), all without having to speak or be seen.
Coming from the clothes-are-not-important school of fashion, all of Gates’s clothes are purchased by his mother or his housekeeper.
“He really should have his colors done”, one of the women of Microsoft said to me as we watched Chairman Bill make a presentation in his favorite tan suit and green tie.
Do us all a favor, Bill; ditch the tan suit.
The third most important woman in Bill Gates’s life is the designated girlfriend. She has a name and a face that changes regularly, because nobody can get too close to Bill, who simply will not marry as long as his parents live. No, he didn’t say that. I did.
Most of Gates’s energy is saved for the Boys’ Club -- 212 acres of forested office park in Redmond, Washington, where 10,000 workers wait to do his bidding. Everything there, too, is Bill-centric, there is little or no adult supervision, and the soft drinks are free.
**********
Bill Gates is the Henry Ford of the personal computer industry. He is the father, the grandfather, the uncle, and the godfather of the PC, present at the microcomputer’s birth and determined to be there at its end. Just ask him. Bill Gates is the only head honcho I have met in this business who isn’t angry, and that’s not because he’s any weirder than the others -- each is weird in his own way -- but because he is the only head honcho who is not in a hurry. The others are all trying like hell to get somewhere else before the market changes and their roofs fall in, while Gates is happy right where he is.
Gates and Ford are similar types. Technically gifted, self-centered, and eccentric, they were both slightly ahead of their times and took advantage of that fact. Ford was working on standardization, mass production, and interchangeable parts back when most car buyers were still wealthy enthusiasts, roads were unpaved, and automobiles were generally built by hand. Gates was vowing to put “a computer on every desk and in every home running Microsoft software” when there were fewer than a hundred microcomputers in the world. Each man consciously worked to create an industry out of something that sure looked like a hobby to everyone else.
A list of Ford’s competitors from 1908, when he began mass producing cars at the River Rouge plant, would hold very few names that are still in the car business today. Cadillac, Oldsmobile -- that’s about it. Nearly every other Ford competitor from those days is gone and forgotten. The same can be said for a list of Microsoft competitors from 1975. None of those companies still exists.
Looking through the premier issue of my own rag, InfoWorld, I found nineteen advertisers in that 1979 edition, which was then known as the Intelligent Machines Journal. Of those nineteen advertisers, seventeen are no longer in business. Other than Microsoft, the only survivor is the MicroDoctor -- one guy in Palo Alto who has been repairing computers in the same storefront on El Camino Real since 1978. Believe me, the MicroDoctor, who at this point describes his career as a preferable alternative to living under a bridge somewhere, has never appeared on anyone’s list of Microsoft competitors.
So why are Ford and Microsoft still around when their contemporaries are nearly all gone? Part of the answer has to do with the inevitably high failure rate of companies in new industries; hundreds of small automobile companies were born and died in the first twenty years of this century, and hundreds of small aircraft companies climbed and then power dived in the second twenty years. But an element not to be discounted in this industrial Darwinism is sheer determination. Both Gates and Ford were determined to be long-term factors in their industries. Their objective was to be around fifteen or fifty years later, still calling the shots and running the companies they had started. Most of their competitors just wanted to make money. Both Ford and Gates also worked hard to maintain total control over their operations, which meant waiting as long as possible before selling shares to the public. Ford Motor Co. didn’t go public until nearly a decade after Henry Ford’s death.
Talk to a hundred personal computer entrepreneurs, and ninety-nine of them won’t be able to predict what they will be doing for a living five years from now. This is not because they expect to fail in their current ventures but because they expect to get bored and move on. Nearly every high-tech enterprise is built on the idea of working like crazy for three to five years and then selling out for a vast amount of money. Nobody worries about how the pension plan stacks up because nobody expects to be around to collect a pension. Nobody loses sleep over whether their current business will be a factor in the market ten or twenty years from now -- nobody, that is, except Bill Gates, who clearly intends to be as successful in the next century as he is in this one and without having to change jobs to do it.
At 19, Bill Gates saw his life’s work laid out before him. Bill, the self-proclaimed god of software, said in 1975 that there will be a Microsoft and that it will exist for all eternity, selling sorta okay software to the masses until the end of time. Actually, the sorta okay part came along later, and I am sure that Bill intended always for Microsoft’s products to be the best in their fields. But then Ford intended his cars to be best, but he settled, instead, for just making them the most popular. Gates, too, has had to make some compromises to meet his longevity goals for Microsoft.
Both Ford and Gates surrounded themselves with yes-men and -women, whose allegiance is to the leader rather than to the business. Bad idea. It reached the point at Ford where one suddenly out-of-favor executive learned that he was fired when he found his desk had been hacked to pieces with an ax. It’s not like that at Microsoft yet, but emotions do run high, and Chairman Bill is still young.
As Ford did, Gates typically refuses to listen to negative opinions and dismisses negative people from his mind. There is little room for constructive criticism. The need is so great at Microsoft for news to be good that warning signs are ignored and major problems are often overlooked until it is too late. Planning to enter the PC database market, for example, Microsoft spent millions on a project code-named Omega, which came within a few weeks of shipping in 1990, even though the product didn’t come close to doing what it was supposed to do.
The program manager for Omega, who was so intent on successfully bringing together his enormous project, reported only good news to his superiors when, in fact, there were serious problems with the software. It would have been like introducing a new car that didn’t have brakes or a reverse gear. Cruising toward a major marketplace embarrassment, Microsoft was saved only through the efforts of brave souls who presented Mike Maples, head of Microsoft’s applications division, with a list of promised Omega features that didn’t exist. Maples invited the program manager to demonstrate his product, then asked him to demonstrate each of the non-features. The Omega introduction was cancelled that afternoon.
From the beginning, Bill Gates knew that microcomputers would be big business and that it was his destiny to stand at the center of this growing industry. Software, much more than hardware, was the key to making microcomputers a success, and Gates knew it. He imagined that someday there would be millions of computers on desks and in homes, and he saw Microsoft playing the central role in making this future a reality. His goal for Microsoft in those days was a simple one: monopoly.
“We want to monopolize the software business”, Gates said time and again in the late 1970s. He tried to say it in the 1980s too, but by then Microsoft had public relations people and antitrust lawyers in place to tell their young leader that the M word was not on the approved corporate vocabulary list. But it’s what he meant. Bill Gates had supreme confidence that he knew better than anyone else how software ought to be developed and that his standards would become the de facto standards for the fledgling industry. He could imagine a world in which users would buy personal computers that used Microsoft operating systems, Microsoft languages, and Microsoft applications. In fact, it was difficult, even painful, for Gates to imagine a world organized any other way. He’s a very stubborn guy about such things, to the point of annoyance.
The only problem with this grand vision of future computing -- with Bill Gates setting all the standards, making all the decisions, and monopolizing all the random-access memory in the world -- was that one person alone couldn’t do it. He needed help. In the first few years at Microsoft, when the company had fewer than fifty employees and everyone took turns at the switchboard for fifteen minutes each day, Gates could impose his will by reading all the computer code written by the other programmers and making changes. In fact, he rewrote nearly everything, which bugged the hell out of programmers when they had done perfectly fine work only to have it be rewritten (and not necessarily improved) by peripatetic Bill Gates. As Microsoft grew, though, it became obvious that reading every line and rewriting every other wasn’t a feasible way to continue. Gates needed to find an instrument, a method of governing his creation.
Henry Ford had been able to rule his industrial empire through the instrument of the assembly line. The assembly-line worker was a machine that ate lunch and went home each night to sleep in a bed. On the assembly line, workers had no choice about what they did or how they did it; each acted as a mute extension of Ford’s will. No Model T would go out with four headlights instead of two, and none would be painted a color other than black because two headlights and black paint were what Mr. Ford specified for the cars coming off his assembly line. Bill Gates wanted an assembly line, too, but such a thing had never before been applied to the writing of software.
Writing software is just that -- writing. And writing doesn’t work very well on an assembly line. Novels written by committee are usually not good novels, and computer programs written by large groups usually aren’t very good either. Gates wanted to create an enormous enterprise that would supply most of the world’s microcomputer software, but to do so he had to find a way to impose his vision, his standards, on what he expected would become thousands of programmers writing millions of lines of code -- more than he could ever personally read.
Good programmers don’t usually make good business leaders. Programmers are typically introverted, have awkward social skills, and often aren’t very good about paying their own bills, much less fighting to close deals and get customers to pay up. This ability to be so good at one thing and so bad at another stems mainly, I think, from the fact that programming is an individual sport, where the best work is done, more often than not, just to prove that it can be done rather than to meet any corporate goal.
Each programmer wants to be the best in his crowd, even if that means wanting the others to be not quite so good. This trend, added to the hated burden of meetings and having to care about things like group objectives, morale, and organizational minutiae, can put those bosses who still think of themselves primarily as programmers at odds with the very employees on whom they rely for the overall success of the company. Bill Gates is this way, and his bitter rivalry with nearly every other sentient being on the planet could have been his undoing.
To realize his dream, Gates had to create a corporate structure at Microsoft that would allow him to be both industry titan and top programmer. He had to invent a system that would satisfy his own adolescent need to dominate and his adult need to inspire. How did he do it?
Mind control.
The instrument that allowed Microsoft to grow yet remain under the creative thumb of Bill Gates walked in the door one day in 1979. The instrument’s name was Charles Simonyi.
Unlike most American computer nerds, Charles Simonyi was raised in an intellectually supportive environment that encouraged both thinking and expression. The typical American nerd was a smart kid turned inward, concentrating on science and technology because it was more reliable than the world of adult reality. The nerds withdrew into their own society, which logically excluded their parents, except as chauffeurs and financiers. Bill Gates was the son of a big-shot Seattle lawyer who didn’t understand his kid. But Charles Simonyi grew up in Hungary during the 1950s, the son of an electrical engineering professor who saw problem solving as an integral part of growing up. And problem solving is what computer programming is all about.
In contrast to the parents of most American computer nerds, who usually had little to offer their too-smart sons and daughters, the elder Simonyi managed to play an important role in his son’s intellectual development, qualifying, I suppose, for the Ward Cleaver Award for Quantitative Fathering.
“My father’s rule was to imagine that you have the solution already”, Simonyi remembered. “It is a great way to solve problems. I’d ask him a question: How many horses does it take to do something? And he’d answer right away, ‘Five horses; can you tell me if I am right or wrong?’ By the time I’d figured out that it couldn’t be five, he’d say, ‘Well if it’s not five, then it must be X. Can you solve for that?’ And I could, because the problem was already laid out from the test of whether five horses was correct. Doing it backward removed the anxiety from the answer. The anxiety, of course, is the fear that the problem can’t be solved -- at least not by me”.
With the help of his father, Simonyi became Hungary’s first teenage computer hacker. That’s hacker in the old sense of being a good programmer who has a positive emotional relationship with the machine he is programming. The new sense of hacker -- the Time and Newsweek versions of hackers as technopunks and cyberbandits, tromping through computer systems wearing hobnail boots, leaving footprints, or worse, preying on the innocent data of others -- those hackers aren’t real hackers at all, at least not to me. Go read another book for stories about those people.
Charles Simonyi was a hacker in the purest sense: he slept with his computer. Simonyi’s father helped him get a job as a night watchman when he was 16 years old, guarding the Russian-built Ural II computer at the university. The Ural II had 2,000 vacuum tubes, at least one of which would overheat and burn out each time the computer was turned on. This meant that the first hour of each day was spent finding that burned-out vacuum tube and replacing it. The best way to avoid vacuum tube failure was to leave the computer running all night, so young Simonyi offered to stay up with the computer, guarding and playing with it. Each night, the teenager was in total control of probably half the computing resources in the entire country.
Not that half the computer resources of Hungary were much in today’s terms. The Ural II had 4,000 bytes of memory and took eighty microseconds to add two numbers together. This performance and amount of memory was comparable to an early Apple II. Of course the Ural II was somewhat bigger than an Apple II, filling an entire room. And it had a very different user interface; rather than a video terminal or a stack of punch cards, it used an input device much like an old mechanical cash register. The zeroes and ones of binary machine language were punched on cash register-like mechanical buttons and then entered as a line of data by smashing the big ENTER key on the right side. Click-click-click-click-click-click-click-click—smash!
Months of smashing that ENTER key during long nights spent learning the innards of the Ural II with its hundreds of blinking lights started Simonyi toward a career in computing. By 1966, he had moved to Denmark and was working as a professional programmer on his first computer with transistors rather than vacuum tubes. The Danish system still had no operating system, though. By 1967, Simonyi was an undergraduate computer science student at the University of California, working on a Control Data supercomputer in Berkeley. Still not yet 20, Simonyi had lived and programmed his way through nearly the entire history of von Neumann-type computing, beginning in the time warp that was Hungary.
By the 1970s, Simonyi was the token skinny Hungarian at Xerox PARC, where his greatest achievement was Bravo, the what-you-see-is-what-you-get word processing software for the Alto workstation.
While PARC was the best place in the world to be doing computer science in those days, its elitism bothered Simonyi, who couldn’t seem to (or didn’t want to) shake his socialist upbringing. Remember that at PARC there were no junior researchers, because Bob Taylor didn’t believe in them. Everyone in Taylor’s lab had to be the best in his field so that the Computer Science Lab could continue to produce its miracles of technology while remaining within Taylor’s arbitrary limit of fifty staffers. Simonyi wanted larger staffs, including junior people, and he wanted to develop products that might reach market in the programmer’s lifetime.
PARC technology was amazing, but its lack of reality was equally amazing. For example, one 1978 project, code-named Adam, was a laser-scanned color copier using very advanced emitter-coupled logic semiconductor technology. The project was technically impossible at the time and is only just becoming possible today, more than twelve years later. Since Moore’s Law says that semiconductor density doubles every eighteen months, this means that Adam was undertaken approximately eight generations before it would have been technically viable, which is rather like proposing to invent the airplane in the late sixteenth century. With all the other computer knowledge that needed to be gathered and explored, why anyone would bother with a project like Adam completely escaped Charles Simonyi, who spent lots of time railing against PARC purism and a certain amount of time trying to circumvent it.
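The arithmetic behind that eight-generations figure is worth making explicit, since it is the whole joke. A minimal back-of-the-envelope check, in Python and using only the numbers already given above (an eighteen-month doubling period and a roughly twelve-year gap), would be:

```python
# Back-of-the-envelope check of the "eight generations" claim above,
# using only the figures in the text: density doubles every 18 months,
# and Adam preceded feasibility by roughly twelve years.
doubling_period_years = 1.5
years_too_early = 12

generations = years_too_early / doubling_period_years  # 8.0
density_shortfall = 2 ** generations                   # 256.0

print(f"generations ahead of feasibility: {generations:.0f}")
print(f"implied density shortfall: {density_shortfall:.0f}x")
```

On the paragraph’s own arithmetic, then, Adam was betting on semiconductors roughly 256 times denser than anything 1978 could deliver.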
This was the case with Bravo. The Alto computer, with its beautiful bit-mapped white-on-black screen, needed software, but there were no extra PARC brains to spare to write programs for it. Money wasn’t a problem, but manpower was; it was almost impossible to hire additional people at the Computer Science Laboratory because of the arduous hiring gauntlet and Taylor’s reluctance to manage extra heads. When heads were added, they were nearly always Ph.D.s, and the problem with Ph.D.s is that they are headstrong; they won’t do what you tell them to. At least they wouldn’t do what Charles Simonyi told them to do. Simonyi did not have a Ph.D.
Simonyi came up with a scam. He proposed a research project to study programmer productivity and how to increase it. In the course of the study, test subjects would be paid to write software under Simonyi’s supervision. The test subjects would be Stanford computer science students. The software they would write was Bravo, Simonyi’s proposed editor for the Alto. By calling them research subjects rather than programmers, he was able to bring some worker bees into PARC.
The Bravo experiment was a complete success, and the word processing program was one of the first examples of software that presented document images on-screen that were identical to the eventual printed output. Beyond Bravo, the scam even provided data for Simonyi’s own dissertation, plunking him right into the ranks of the PARC unmanageable. His 1976 paper was titled “Meta-Programming: A Software Production Method.”
Simonyi’s dissertation was an attempt to describe a more efficient method of organizing programmers to write software. Since software development will always expand to fill all available time (it does not matter how much time is allotted -- software is never early), his paper dealt with how to get more work done in the limited time that is typically available. Looking back at his Bravo experience, Simonyi concluded that simply adding more programmers to the team was not the correct method for meeting a rapidly approaching deadline. Adding more programmers just increased the amount of communication overhead needed to keep the many programmers all working in the same direction. This additional overhead was nearly always enough to absorb any extra manpower, so adding more heads to a project just meant that more money was being spent to reach the same objective at the same time as would have the original, smaller, group. The trick to improving programming productivity was making better use of the programmers already in place rather than adding more programmers. Simonyi’s method of doing this was to create the position of metaprogrammer.
The metaprogrammer was the designer, decision maker, and communication controller in a software development group. As the metaprogrammer on Bravo, Simonyi mapped out the basic design for the editor, deciding what it would look like to the user and what would be the underlying code structure. But he did not write any actual computer code; Simonyi prepared a document that described Bravo in enough detail that his “research subjects” could write the code that brought each feature to life on-screen.
Once the overall program was designed, the metaprogrammer’s job switched to handling communication in the programming group and making decisions. The metaprogrammer was like a general contractor, coordinating all the subcontractor programmers, telling them what to do, evaluating their work in progress, and making any required decisions. Individual programmers were allowed to make no design decisions about the project. All they did was write the code as described by the metaprogrammer, who made all the decisions and made them just as fast as he could, because Simonyi calculated that it was more important for decisions to be made quickly in such a situation than that they be made well. As long as at least 85 percent of the metaprogrammer’s interim decisions were ultimately correct (a percentage Simonyi felt confident that he, at least, could reach more or less on the basis of instinct), there was more to be lost than gained by thoughtful deliberation.
The metaprogrammer also coordinated communication among the individual programmers. Like a telephone operator, the metaprogrammer was at the center of all interprogrammer communication. A programmer with a problem or a question would take it to the metaprogrammer, who could come up with an answer or transfer the question or problem to another programmer who the metaprogrammer felt might have the answer. The alternative was to allow free discussion of the problem, which might involve many programmers working in parallel on the problem, using up too much of the group’s time.
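To make the communication topology concrete, here is a minimal sketch in Python -- purely an illustration of the hub-and-spoke idea described above, not anything Simonyi or Microsoft actually built. Every question goes to the hub, which either answers from a prior ruling, routes it to exactly one other programmer, or simply decides on the spot.

```python
# Toy illustration of Simonyi's hub-and-spoke metaprogrammer model.
# All names and the routing logic here are hypothetical; the point is
# only that programmers never talk to each other directly -- every
# question passes through the single decision-making hub.

class Metaprogrammer:
    def __init__(self):
        self.programmers = {}        # name -> area of expertise
        self.design_decisions = {}   # question -> quick ruling

    def hire(self, name, expertise):
        self.programmers[name] = expertise

    def ask(self, who, question, topic):
        # The hub answers immediately if it already has a ruling...
        if question in self.design_decisions:
            return f"{who}: {self.design_decisions[question]}"
        # ...otherwise it routes the question to one likely expert,
        # rather than letting the whole group debate it.
        for name, expertise in self.programmers.items():
            if name != who and expertise == topic:
                return f"{who}: ask {name}"
        # Failing that, the hub just decides -- quickly, not perfectly.
        ruling = f"decision on '{question}' made by the hub"
        self.design_decisions[question] = ruling
        return f"{who}: {ruling}"

hub = Metaprogrammer()
hub.hire("alice", "screen layout")
hub.hire("bob", "file format")
print(hub.ask("alice", "How are documents stored?", "file format"))
print(hub.ask("bob", "What font for headings?", "screen layout"))
```

The design choice being illustrated is exactly the one Simonyi argued for: no programmer-to-programmer broadcast, and quick rulings from the hub in preference to group deliberation.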
By centralizing design, decision making, and communication in a single metaprogrammer, Simonyi felt that software could be developed more efficiently and faster. The key to the plan’s success, of course, was finding a class of obedient programmers who would not contest the metaprogrammer’s decisions.
The irony in this metaprogrammer concept is that Simonyi, who bitched and moaned so much about the elitism of Xerox PARC, had, in his dissertation, built a vastly more rigid structure that replaced elitism with authoritarianism.
In the fluid structure of Taylor’s lab at PARC, only the elite could survive the demanding intellectual environment. In order to bring junior people into the development organization, Simonyi promoted an elite of one -- the metaprogrammer. Both Taylor’s organization at CSL and Simonyi’s metaprogrammer system had hub-and-spoke structures, though at CSL, most decision making was distributed to the research groups themselves, which is what made it even possible for Simonyi to perpetrate the scam that produced Bravo. In Simonyi’s system, only the metaprogrammer had the power to decide.
Simonyi, the Hungarian, instinctively chose to emulate the planned economy of his native country in his idealized software development team. Metaprogramming was collective farming of software. But like collective farming, it didn’t work very well.
By 1979, the glamor of Xerox PARC had begun to fade for Simonyi. “For a long while I believed the value we created at PARC was so great, it was worth the losses”, he said. “But in fact, the ideas were good, but the work could be recreated. So PARC was not unique.
“They had no sense of business at all. I remember a PARC lunch when a director (this was after the oil shock) argued that oil has no price elasticity. I thought, ‘What am I doing working here with this Bozo?’”
Many of the more entrepreneurial PARC techno-gods had already left to start or join other ventures. One of the first to go was Bob Metcalfe, the Ethernet guy, who left to become a consultant and then started his own networking company to exploit the potential of Ethernet that he thought was being ignored by Xerox. Planning his own break for the outside world with its bigger bucks and intellectual homogeneity, Simonyi asked Metcalfe whom he should approach about a job in industry. Metcalfe produced a list of ten names, with Bill Gates at the top. Simonyi never got around to calling the other nine.
When Simonyi moved north from California to join Microsoft in 1979, he brought with him two treasures for Bill Gates. First was his experience in developing software applications. There are four types of software in the microcomputer business: operating systems like Gary Kildall’s CP/M, programming languages like Bill Gates’s BASIC, applications like VisiCalc, and utilities, which are little programs that add extra functions to the other categories. Gates knew a lot about languages, thought he knew a lot about operating systems, had no interest in utilities, but knew very little about applications and admitted it.
The success of VisiCalc, which was just taking off when Simonyi came to Microsoft, showed Gates that application software --spreadsheets, word processors, databases and such -- was one of the categories he would have to dominate in order to achieve his lofty goals for Microsoft. And Simonyi, who was seven years older, maybe smarter, and coming straight from PARC -- Valhalla itself -- brought with him just the expertise that Gates would need to start an applications division at Microsoft. They quickly made a list of products to develop, including a spreadsheet, word processor, database, and a long-since-forgotten car navigation system.
The other treasure that Simonyi brought to Microsoft was his dissertation. Unlike PARC, Microsoft didn’t have any Ph.D.s before Simonyi signed on, so Gates did as much research on the Hungarian as he could, which included having a look at the thesis. Reading through the paper, Gates saw in Simonyi’s metaprogrammer just the instrument he needed to rule a vastly larger Microsoft with as much authority as he ruled the company in 1979, when it had around fifty employees.
The term metaprogrammer was never used. Gates called it the “software factory”, but what he and Simonyi implemented at Microsoft was a hierarchy of metaprogrammers. Unlike Simonyi’s original vision, Gates’s implementation used several levels of metaprogrammers, which allowed a much larger organization.
Gates was the central metaprogrammer. He made the rules, set the tone, controlled the communications, and made all the technical decisions for the whole company. He surrounded himself with a group of technical leaders called architects. Simonyi was one of these super-nerds, each of whom was given overall responsibility for an area of software development. Each architect was, in turn, a metaprogrammer, surrounded by program managers, the next lower layer of nerd technical management. The programmers who wrote the actual computer code reported to the program managers, who were acting as metaprogrammers, too.
The beauty of the software factory, from Bill Gates’s perspective, was that every participant looked strictly toward the center, and at that center stood Chairman Bill -- a man so determined to be unique in his own organization that Microsoft had more than 500 employees before hiring its second William.
The irony of all this diabolical plotting and planning is that it did not work. It was clear after less than three months that metaprogramming was a failure. Software development, like the writing of books, is an iterative process. You write a program or a part of a program, and it doesn’t work; you improve it, but it still doesn’t work very well; you improve it one more time (or twenty more times), and then maybe it ships to users. With all decisions being made at the top and all information supposedly flowing down from the metaprogrammer to the 22-year-old peon programmers, the reverse flow of information required to make the changes needed for each improved iteration wasn’t planned for. Either the software was never finished, or it was poorly optimized, as was the case with the Xerox Star, the only computer I know of that had its system software developed in this way. The Star was a dog.
The software factory broke down, and Microsoft quickly went back to writing code the same way everyone else did. But the structure of architects and program managers was left in place, with Bill Gates still more or less controlling it all from the center. And since a control structure was all that Chairman Bill had ever really wanted, he at least considered the software factory to be a success.
Through the architects and program managers, Gates was able to control the work of every programmer at Microsoft, but to do so reliably required cheap and obedient labor. Gates set a policy that consciously avoided hiring experienced programmers, specializing, instead, in recent computer science graduates.
Microsoft became a kind of cult. By hiring inexperienced workers and indoctrinating them into a religion that taught the concept that metaprogrammers were better than mere programmers and that Bill Gates, as the metametaprogrammer, was perfect, Microsoft created a system of hero worship that extended Gates’s will into every aspect of the lives of employees he had not even met. It worked for Kim Il Sung in North Korea, and it works in the suburbs east of Seattle too.
Most software companies hire the friends of current employees, but Microsoft hires kids right out of college and relocates them. The company’s appetite for new programming meat is nearly insatiable. One year Microsoft got in trouble with the government of India for hiring nearly every computer science graduate in the country and moving them all to Redmond.
So here are these thousands of neophyte programmers, away from home in their first working situation. All their friends are Microsoft programmers. Bill is a father/folk hero. All they talk about is what Bill said yesterday and what Bill did last week. And since they don’t have much to do except talk about Bill and work, there you find them at 2:00 a.m., writing code between hockey matches in the hallway.
Microsoft programmers work incredibly long hours, most of them unproductive. It’s like a Japanese company where overtime has a symbolic importance and workers stay late, puttering around the office doing little or nothing just because that’s what everyone else does. That’s what Chairman Bill does, or is supposed to do, because the troops rarely even see him. I probably see more of Bill Gates than entry-level programmers do.
At Microsoft it’s a “disadvantage” to be married or “have any other priority but work”, according to a middle manager who was unlucky enough to have her secretly taped words later played in court as evidence in a case claiming that Microsoft discriminates against married employees. She described Microsoft as a company where employees were expected to be single or live a “singles lifestyle”, and said the company wanted employees that “ate, breathed, slept, and drank Microsoft,” and felt it was “the best thing in the world.”
The real wonder in this particular episode is not that Microsoft discriminates against married employees, but that the manager involved was a woman. Women have had a hard time working up through the ranks. Only two women have ever made it to the vice-presidential level -- Ida Cole and Jean Richardson. Both were hired away from Apple at a time when Microsoft was coming under federal scrutiny for possible sex discrimination. Richardson lasted a few months in Redmond, while Cole stayed until all her stock options vested, though she was eventually demoted from her job as vice-president.
As in any successful cult, sacrifice, penance, and the idea that the deity is perfect and his priests are better than you all work at Microsoft. Each level, from Gates on down, screams at the next, goading and humiliating them. And while you can work any eighty hours per week that you want and dress any way that you like, you can’t talk back in a meeting when your boss says you are shit in front of all your co-workers. It just isn’t done. When Bill Gates says that he could do in a weekend what you’ve failed to do in a week or a month, he’s lying, but you don’t know any better and just go back to try harder.
This all works to the advantage of Gates, who gets away with murder until the kids eventually realize that this is not the way the rest of the world works. But by then it is three or four years later, they’ve made their contributions to Microsoft, and are ready to be replaced by another group of kids straight out of school.
My secret suspicion is that Microsoft’s cult of personality hides a deep-down fear on Gates’s part that maybe he doesn’t really know it all. A few times I’ve seen him cornered by some techie who is not from Microsoft and not in awe, a techie who knows more about the subject at hand than Bill Gates ever will. I’ve seen a flash of fear in Gates’s eyes then. Even with you or me, topics can range beyond Bill’s grasp, and that’s when he uses his “I don’t know how technical you are” line. Sometimes this really means that he doesn’t want to talk over your head, but just as often it means that he’s the one who really doesn’t know what he’s talking about and is using this put-down as an excuse for changing the subject. To take this particularly degrading weapon out of his hands forever, I propose that should you ever talk with Bill Gates and hear him say, “I don’t know how technical you are,” reply by saying that you don’t know how technical he is. It will drive him nuts.
The software factory allowed Bill Gates to build and control an enormous software development organization that operates as an extension of himself. The system can produce lots of applications, programming languages, and operating systems on a regular basis and at relatively low cost, but there is a price for this success: the loss of genius. The software factory allows for only a single genius -- Bill Gates. But since Bill Gates doesn’t actually write the code in Microsoft’s software, that means that few flashes of genius make their way into the products. They are derivative -- successful, but derivative. Gates deals with this problem through a massive force of will, telling himself and the rest of the world that commercial success and technical merit are one and the same. They aren’t. He says that Microsoft, which is a superior marketing company, is also a technical innovator. It isn’t.
The people of Microsoft, too, choose to believe that their products are state of the art. Not to do so would be to dispute Chairman Bill, which just is not done. It’s easier to distort reality.
Charles Simonyi accepts Microsoft mediocrity as an inevitable price paid to create a large organization. “The risk of genius is that the products that result from genius often don’t have much to do with each other”, he explained. “We are trying to build core technologies that can be used in a lot of products. That is more valuable than genius.
“True geniuses are very valuable if they are motivated. That’s how you start a company -- around a genius. At our stage of growth, it’s not that valuable. The ability to synthesize, organize, and get people to sign off on an idea or project is what we need now, and those are different skills”.
Simonyi started Microsoft’s applications group in 1979, and the first application was, of course, a spreadsheet. Other applications soon followed as Simonyi and Gates built the development organization they knew would be needed when microcomputing finally hit the big time, and Microsoft would take its position ahead of all its competitors. All they had to do was be ready and wait.
In the software business, as in most manufacturing industries, there are inventive organizations and maintenance organizations. Dan Bricklin, who invented VisiCalc, the first spreadsheet, ran an inventive organization. So did Gary Kildall, who developed CP/M, the first microcomputer operating system. Maintenance organizations are those, like Microsoft, that generally produce derivative products -- the second spreadsheet or yet another version of an established programming language. BASIC was, after all, a language that had been placed in the public domain a decade before Bill Gates and Paul Allen decided to write their version for the Altair.
When Gates said, “I want to be the IBM of software”, he consciously wanted to be a monolith. But unconsciously he wanted to emulate IBM, which meant having a reactive strategy, multiple divisions, and poor internal communications.
As inventive organizations grow and mature, they often convert themselves into maintenance organizations, dedicated to doing revisions of formerly inventive products and boring as hell for the original programmers who were used to living on adrenalin rushes and junk food. This transition time, from inventive to maintenance, is a time of crisis for these companies and their founders.
Metaprogrammers, and especially nested hierarchies of metaprogrammers, won’t function in inventive organizations, where the troops are too irreverent and too smart to be controlled. But metaprogrammers work just fine at Microsoft, which has never been an inventive organization and so has never suffered the crisis that accompanies that fall from grace when the inventive nerds discover that it’s only a job.
Reprinted with permission
Tenth in a series. Robert X. Cringely's brilliant tome about the rise of the personal computing industry continues, looking at programming languages and operating systems.
Published in 1991, Accidental Empires is an excellent lens for viewing not just the past but also the future of computing.
CHAPTER FOUR
AMATEUR HOUR
You have to wonder what it was we were doing before we had all these computers in our lives. Same stuff, pretty much. Down at the auto parts store, the counterman had to get a ladder and climb way the heck up to reach some top shelf, where he’d feel around in a little box and find out that the muffler clamps were all gone. Today he uses a computer, which tells him that there are three muffler clamps sitting in that same little box on the top shelf. But he still has to get the ladder and climb up to get them, and, worse still, sometimes the computer lies, and there are no muffler clamps at all, spoiling the digital perfection of the auto parts world as we have come to know it.
What we’re often looking for when we add the extra overhead of building a computer into our businesses and our lives is certainty. We want something to believe in, something that will take from our shoulders the burden of knowing when to reorder muffler clamps. In the twelfth century, before there even were muffler clamps, such certainty came in the form of a belief in God, made tangible through the building of cathedrals -- places where God could be accessed. For lots of us today, the belief is more in the sanctity of those digital zeros and ones, and our cathedral is the personal computer. In a way, we’re replacing God with Bill Gates.
Uh-oh.
The problem, of course, is with those zeros and ones. Yes or no, right or wrong, is what those digital bits seem to signify, looking so clean and unconnected that we forget for a moment about that time in the eighth grade when Miss Schwerko humiliated us all with a true-false test. The truth is that, for all the apparent precision of computers, and despite the fact that our mothers and Tom Peters would still like to believe that perfection is attainable in this life, computer and software companies are still remarkably imprecise places, and their products reflect it. And why shouldn’t they, since we’re still at the fumbling stage, where good and bad developments seem to happen at random?
Look at Intel, for example. Up to this point in the story, Intel comes off pretty much as high-tech heaven on earth. As the semiconductor company that most directly begat the personal computer business, Intel invented the microprocessor and memory technologies used in PCs and acted as an example of how a high-tech company should be organized and managed. But that doesn’t mean that Bob Noyce’s crew didn’t screw up occasionally.
There was a time in the early 1980s when Intel suffered terrible quality problems. It was building microprocessors and other parts by the millions and by the millions these parts tested bad. The problem was caused by dust, the major enemy of computer chip makers. When your business relies on printing metallic traces that are only a millionth of an inch wide, having a dust mote ten times that size come rolling across a silicon wafer means that some traces won’t be printed correctly and some parts won’t work at all. A few bad parts are to be expected, since there are dozens, sometimes hundreds, printed on a single wafer, which is later cut into individual components. But Intel was suddenly getting as many bad parts as good, and that was bad for business.
Semiconductor companies fight dust by building their components in expensive clean rooms, where technicians wear surgical masks, paper booties, rubber gloves, and special suits and where the air is specially filtered. Intel had plenty of clean rooms, but it still had a big dust problem, so the engineers cleverly decided that the wafers were probably dusty before they ever arrived at Intel. The wafers were made in the East by Monsanto. Suddenly it was Monsanto’s dust problem.
Monsanto engineers spent months and millions trying to eliminate every last speck of dust from their silicon wafer production facility in South Carolina. They made what they thought was terrific progress, too, though it didn’t show in Intel’s production yields, which were still terrible. The funny thing was that Monsanto’s other customers weren’t complaining. IBM, for example, wasn’t complaining, and IBM was a very picky customer, always asking for wafers that were extra big or extra small or triangular instead of round. IBM was having no dust problems.
If Monsanto was clean and Intel was clean, the only remaining possibility was that the wafers somehow got dusty on their trip between the two companies, so the Monsanto engineers hired a private investigator to tail the next shipment of wafers to Intel. Their private eye uncovered an Intel shipping clerk who was opening incoming boxes of super-clean silicon wafers and then counting out the wafers by hand into piles on a super-unclean desktop, just to make sure that Bob Noyce was getting every silicon wafer he was paying for.
The point of this story goes far beyond the undeification of Intel to a fundamental characteristic of most high-tech businesses. There is a business axiom that management gurus spout and that bigshot industrialists repeat to themselves as a mantra if they want to sleep well at night. The axiom says that when a business grows past $1 billion in annual sales, it becomes too large for any one individual to have a significant impact. Alas, this is not true when it’s a $1 billion high-tech business, where too often the critical path goes right through the head of one particular programmer or engineer or even through the head of a well-meaning clerk down in the shipping department. Remember that Intel was already a $1 billion company when it was brought to its knees by desk dust.
The reason that there are so many points at which a chip, a computer, or a program is dependent on just one person is that the companies lack depth. Like any other new industry, this is one staffed mainly by pioneers, who are, by definition, a small minority. People in critical positions in these organizations don’t usually have backup, so when they make a mistake, the whole company makes a mistake.
My estimate, in fact, is that there are only about twenty-five real people in the entire personal computer industry -- this shipping clerk at Intel and around twenty-four others. Sure, Apple Computer has 10,000 workers, or says it does, and IBM claims nearly 400,000 workers worldwide, but has to be lying. Those workers must be temps or maybe androids because I keep running into the same two dozen people at every company I visit. Maybe it’s a tax dodge. Finish this book and you’ll see; the companies keep changing, but the names are always the same.
Intel begat the microprocessor and the dynamic random access memory chip, which made possible MITS, the first of many personal computer companies with a stupid name. And MITS, in turn, made possible Microsoft, because computer hardware must exist, or at least be claimed to exist, before programmers can even envision software for it. Just as cave dwellers didn’t squat with their flint tools chipping out parking brake assemblies for 1967 Buicks, so programmers don’t write software that has no computer upon which to run. Hardware nearly always leads software, enabling new development, which is why Bill Gates’s conversion from minicomputers to microcomputers did not come (could not come) until 1974, when he was a sophomore at Harvard University and the appearance of the MITS Altair 8800 computer made personal computer software finally possible.
Like the Buddha, Gates’s enlightenment came in a flash. Walking across Harvard Yard while Paul Allen waved in his face the January 1975 issue of Popular Electronics announcing the Altair 8800 microcomputer from MITS, they both saw instantly that there would really be a personal computer industry and that the industry would need programming languages. Although there were no microcomputer software companies yet, 19-year-old Bill’s first concern was that they were already too late. “We realized that the revolution might happen without us”, Gates said. “After we saw that article, there was no question of where our life would focus”.
“Our life!” What the heck does Gates mean here -- that he and Paul Allen were joined at the frontal lobe, sharing a single life, a single set of experiences? In those days, the answer was “yes”. Drawn together by the idea of starting a pioneering software company and each convinced that he couldn’t succeed alone, they committed to sharing a single life -- a life unlike that of most other PC pioneers because it was devoted as much to doing business as to doing technology.
Gates was a businessman from the start; otherwise, why would he have been worried about being passed by? There was plenty of room for high-level computer languages to be developed for the fledgling platforms, but there was only room for one first high-level language. Anyone could participate in a movement, but only those with the right timing could control it. Gates knew that the first language -- the one resold by MITS, maker of the Altair -- would become the standard for the whole industry. Those who seek to establish such de facto standards in any industry do so for business reasons.
“This is a very personal business, but success comes from appealing to groups”, Gates says. “Money is made by setting de facto standards”.
The Altair was not much of a consumer product. It came typically as an unassembled $350 kit, clearly targeting only the electronic hobbyist market. There was no software for the machine, so, while it may have existed, it sure didn’t compute. There wasn’t even a keyboard. The only way of programming the computer at first was through entering strings of hexadecimal code by flicking a row of switches on the front panel. There was no display other than some blinking lights. The Altair was limited in its appeal to those who could solder (which eliminated most good programmers) and to those who could program in machine language (which eliminated most good solderers).
BASIC was generally recognized as the easiest programming language to learn in 1975. It automatically converted simple English-like commands to machine language, effectively removing the programming limitation and at least doubling the number of prospective Altair customers.
Since they didn’t have an Altair 8800 computer (nobody did yet), Gates and Allen wrote a program that made a PDP-10 minicomputer at the Harvard Computation Center simulate the Altair’s Intel 8080 microprocessor. In six weeks, they wrote a version of the BASIC programming language that would run on the phantom Altair synthesized in the minicomputer. They hoped it would run on a real Altair equipped with at least 4096 bytes of random access memory. The first time they tried to run the language on a real microcomputer was when Paul Allen demonstrated the product to MITS founder Ed Roberts at the company’s headquarters in Albuquerque. To their surprise and relief, it worked.
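For readers wondering what simulating the Altair’s Intel 8080 on a PDP-10 actually means, the core of any such emulator is a loop that fetches, decodes, and executes the target machine’s instructions one at a time, keeping the pretend registers and memory in ordinary variables. Here is a deliberately tiny sketch in Python that handles just two real 8080 opcodes (MVI A and HLT); it illustrates the technique and is in no way a reconstruction of anything Gates and Allen wrote:

```python
# Minimal fetch-decode-execute loop, in the spirit of simulating one
# processor on top of another. Only two 8080 opcodes are handled here;
# a real 8080 emulator decodes a few hundred.

memory = bytearray(4096)       # the pretend Altair's 4K of RAM
registers = {"A": 0, "PC": 0}  # accumulator and program counter

# A tiny program: load 42 into A (0x3E = MVI A, byte), then halt (0x76).
memory[0:3] = bytes([0x3E, 42, 0x76])

while True:
    opcode = memory[registers["PC"]]           # fetch
    if opcode == 0x3E:                         # decode: MVI A, byte
        registers["A"] = memory[registers["PC"] + 1]
        registers["PC"] += 2                   # execute, then advance
    elif opcode == 0x76:                       # HLT
        break
    else:
        raise ValueError(f"unhandled opcode {opcode:#x}")

print("A =", registers["A"])                   # A = 42
```

The fidelity of that simulated processor was the whole bet: if the emulator’s behavior matched the real chip, code developed against the phantom machine would run on the genuine article.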
MITS BASIC, as it came to be called, gave substance to the microcomputer. Big computers ran BASIC. Real programs had been written in the language and were performing business, educational, and scientific functions in the real world. While the Altair was a computer of limited power, the fact that Allen and Gates were able to make a high-level language like BASIC run on the platform meant that potential users could imagine running these same sorts of applications now on a desktop rather than on a mainframe.
MITS BASIC was dramatic in its memory efficiency and made the bold move of adding commands that allowed programmers to control the computer memory directly. MITS BASIC wasn’t perfect. The authors of the original BASIC, John Kemeny and Thomas Kurtz, both of Dartmouth College, were concerned that Gates and Allen’s version deviated from the language they had designed and placed into the public domain a decade before. Kemeny and Kurtz might have been unimpressed, but the hobbyist world was euphoric.
I’ve got to point out here that for many years Kemeny was president of Dartmouth, a school that didn’t accept me when I was applying to colleges. Later, toward the end of the Age of Jimmy Carter, I found myself working for Kemeny, who was then head of the presidential commission investigating the Three Mile Island nuclear accident. One day I told him how Dartmouth had rejected me, and he said, “College admissions are never perfect, though in your case I’m sure we did the right thing”. After that I felt a certain affection for Bill Gates.
Gates dropped out of Harvard, Allen left his programming job at Honeywell, and both moved to New Mexico to be close to their customer, in the best Tom Peters style. Hobbyists don’t move across country to maintain business relationships, but businessmen do. They camped out in the Sundowner Motel on Route 66 in a neighborhood noted for all-night coffee shops, hookers, and drug dealers.
Gates and Allen did not limit their interest to MITS. They wrote versions of BASIC for other microcomputers as they came to market, leveraging their core technology. The two eventually had a falling out with Ed Roberts of MITS, who claimed that he owned MITS BASIC and its derivatives; they fought and won, something that hackers rarely bothered to do. Capitalists to the bone, they railed against software piracy before it even had a name, writing whining letters to early PC publications.
Gates and Allen started Microsoft with a stated mission of putting “a computer on every desk and in every home, running Microsoft software”. Although it seemed ludicrous at the time, they meant it.
While Allen and Gates deliberately went about creating an industry and then controlling it, they were important exceptions to the general trend of PC entrepreneurism. Most of their eventual competitors were people who managed to be in just the right place at the right time and more or less fell into business. These people were mainly enthusiasts who at first developed computer languages and operating systems for their own use. It was worth the effort if only one person -- the developer himself -- used the product. Often they couldn’t even imagine why anyone else would be interested.
Gary Kildall, for example, invented the first microcomputer operating system because he was tired of driving to work. In the early 1970s, Kildall taught computer science at the Naval Postgraduate School in Monterey, California, where his specialty was compiler design. Compilers are software tools that take entire programs written in a high-level language like FORTRAN or Pascal and translate them into assembly language, which can be read directly by the computer. High-level languages are easier to learn than Assembler, so compilers allowed programs to be completed faster and with more features, although the final code was usually longer than if the program had been written directly in the internal language of the microprocessor. Compilers translate, or compile, large sections of code into Assembler at one time, as opposed to interpreters, which translate commands one at a time.
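As a hedged illustration of that compiler-versus-interpreter distinction (a toy, not anything Kildall or anyone else actually wrote), the sketch below interprets a three-line pretend language one command at a time, and separately "compiles" the same program into a complete pseudo-assembly listing before anything runs.

```python
# A toy command language: each line is "ADD n" or "PRINT".
program = ["ADD 2", "ADD 3", "PRINT"]

def interpret(lines):
    """Interpreter: translate and execute one command at a time."""
    total = 0
    for line in lines:
        op, *arg = line.split()
        if op == "ADD":
            total += int(arg[0])
        elif op == "PRINT":
            print(total)

def compile_program(lines):
    """Compiler: translate the whole program up front; nothing runs here."""
    listing = []
    for line in lines:
        op, *arg = line.split()
        if op == "ADD":
            listing.append(f"ADI {arg[0]}")    # 8080-style add-immediate
        elif op == "PRINT":
            listing.append("CALL PRINTA")      # hypothetical output routine
    return listing

interpret(program)               # executes as it reads: prints 5
print(compile_program(program))  # emits the complete listing first
```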
By 1974, Intel had added the 8008 and 8080 to its family of microprocessors and had hired Gary Kildall as a consultant to write software to emulate the 8080 on a DEC time-sharing system, much as Gates and Allen would shortly do at Harvard. Since there were no microcomputers yet, Intel realized that the best way for companies to develop software for microprocessor-based devices was by using such an emulator on a larger system.
Kildall’s job was to write the emulator, called Interp/80, followed by a high-level language called PL/M, which was planned as a microcomputer equivalent of the XPL language developed for mainframe computers at Stanford University. Nothing so mundane (and usable by mere mortals) as BASIC for Gary Kildall, who had a Ph.D. in compiler design.
What bothered Kildall was not the difficulty of writing the software but the tedium of driving the fifty miles from his home in Pacific Grove across the Santa Cruz mountains to use the Intel minicomputer in Silicon Valley. He could have used a remote teletype terminal at home, but the terminal was incredibly slow for inputting thousands of lines of data over a phone line; driving was faster.
Or he could develop software directly on the 8080 processor, bypassing the time-sharing system completely. Not only could he avoid the long drive, but developing directly on the microprocessor would also bypass any errors in the minicomputer 8080 emulator. The only problem was that the 8080 microcomputer Gary Kildall wanted to take home didn’t exist.
What did exist was the Intellec-8, an Intel product that could be used (sort of) to program an 8080 processor. The Intellec-8 had a microprocessor, some memory, and a port for attaching a Teletype 33 terminal. There was no software and no method for storing data and programs outside of main memory.
The primary difference between the Intellec-8 and a microcomputer was external data storage and the software to control it. IBM had invented a new device, called a floppy disk, to replace punched cards for its minicomputers. The disks themselves could be removed from the drive mechanism, were eight inches in diameter, and held the equivalent of thousands of pages of data. Priced at around $500, the floppy disk drive was perfect for Kildall’s external storage device. Kildall, who didn’t have $500, convinced Shugart Associates, a floppy disk drive maker, to give him a worn-out floppy drive used in its 10,000-hour torture test. While his friend John Torode invented a controller to link the Intellec-8 and the floppy disk drive, Kildall used the 8080 emulator on the Intel time-sharing system to develop his operating system, called CP/M, or Control Program/Monitor.
If a computer acquires a personality, it does so from its operating system. Users interact with the operating system, which interacts with the computer. The operating system controls the flow of data between a computer and its long-term storage system. It also controls access to system memory and keeps those bits of data that are thrashing around the microprocessor from thrashing into each other. Operating systems usually store data in files, which have individual names and characteristics and can be called up as a program or the user requires them.
Gary Kildall developed CP/M on a DEC PDP-10 minicomputer running the TOPS-10 operating system. Not surprisingly, most CP/M commands and file naming conventions look and operate like their TOPS-10 counterparts. It wasn’t pretty, but it did the job.
By the time he’d finished writing the operating system, Intel didn’t want CP/M and had even lost interest in Kildall’s PL/M language. The only customers for CP/M in 1975 were a maker of intelligent terminals and Lawrence Livermore Labs, which used CP/M to monitor programs on its Octopus network.
In 1976, Kildall was approached by Imsai, the second personal computer company with a stupid name. Imsai manufactured an early 8080-based microcomputer that competed with the Altair. In typical early microcomputer company fashion, Imsai had sold floppy disk drives to many of its customers, promising to send along an operating system eventually. With each of them now holding at least $1,000 worth of hardware that was only gathering dust, the customers wanted their operating system, and CP/M was the only operating system for Intel-based computers that was actually available.
By the time Imsai came along, Kildall and Torode had adapted CP/M to four different floppy disk controllers. There were probably 100 little companies talking about doing 8080-based computers, and neither man wanted to invest the endless hours of tedious coding required to adapt CP/M to each of these new platforms. So they split the parts of CP/M that interfaced with each new controller into a separate computer code module, called the Basic Input/Output System, or BIOS. With all the hardware-dependent parts of CP/M concentrated in the BIOS, it became a relatively easy job to adapt the operating system to many different Intel-based microcomputers by modifying just the BIOS.
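Translated into modern pseudocode, the structural idea is easy to show: all the hardware-dependent calls live behind one small interface, so moving to a new machine means rewriting only that module. The class and method names below are hypothetical stand-ins, not CP/M's actual BIOS entry points.

```python
# A minimal sketch of the CP/M-style split: a portable layer that talks
# to the hardware only through a small BIOS interface, plus one BIOS
# implementation per machine. Names here are invented, not CP/M's.
class BIOS:
    """Hardware-dependent layer: rewrite only this for a new machine."""
    def console_out(self, ch):
        raise NotImplementedError
    def read_sector(self, track, sector):
        raise NotImplementedError

class ImsaiBIOS(BIOS):
    def console_out(self, ch):
        print(ch, end="")            # stand-in for real port I/O
    def read_sector(self, track, sector):
        return bytes(128)            # stand-in for a real disk read

class PortableCPM:
    """Hardware-independent layer: identical on every machine."""
    def __init__(self, bios):
        self.bios = bios
    def type_string(self, text):
        for ch in text:
            self.bios.console_out(ch)

PortableCPM(ImsaiBIOS()).type_string("A>")   # same code, any BIOS
```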
With his CP/M and invention of the BIOS, Gary Kildall defined the microcomputer. Peek into any personal computer today, and you’ll find a general-purpose operating system adapted to specific hardware through the use of a BIOS, which is now a specialized type of memory chip.
In the six years after Imsai offered the first CP/M computer, more than 500,000 CP/M computers were sold by dozens of makers. Programmers began to write CP/M applications, relying on the operating system’s features to control the keyboard, screen, and data storage. This base of applications turned CP/M into a de facto standard among microcomputer operating systems, guaranteeing its long-term success. Kildall started a company called Intergalactic Digital Research (later, just Digital Research) to sell the software in volume to computer makers and direct to users for $70 per copy. He made millions of dollars, essentially without trying.
Before he knew it, Gary Kildall had plenty of money, fast cars, a couple of airplanes, and a business that made increasing demands on his time. His success, while not unwelcome, was unexpected, which also meant that it was unplanned for. Success brings with it a whole new set of problems, as Gary Kildall discovered. You can plan for failure, but how do you plan for success?
Every entrepreneur has an objective, which, once achieved, leads to a crisis. In Gary Kildall’s case, the objective -- just to write CP/M, not even to sell it -- was very low, so the crisis came quickly. He was a code god, a programmer who literally saw lines of code fully formed in his mind and then committed them effortlessly to the keyboard in much the same way that Mozart wrote music. He was one with the machine; what did he need with seventy employees?
“Gary didn’t give a shit about the business. He was more interested in getting laid”, said Gordon Eubanks, a former student of Kildall who led development of computer languages at Digital Research. “So much went so well for so long that he couldn’t imagine it would change. When it did -- when change was forced upon him -- Gary didn’t know how to handle it.”
“Gary and Dorothy [Kildall's wife and a Digital Research vice-president] had arrogance and cockiness but no passion for products. No one wanted to make the products great. Dan Bricklin [another PC software pioneer -- read on] sent a document saying what should be fixed in CP/M, but it was ignored. Then I urged Gary to do a BASIC language to bundle with CP/M, but when we finally got him to do a language, he insisted on PL/I -- a virtually unmarketable language”.
Digital Research was slow in developing a language business to go with its operating systems. It was also slow in updating its core operating system and extending it into the new world of 16-bit microprocessors that came along after 1980. The company in those days was run like a little kingdom, ruled by Gary and Dorothy Kildall.
“In one board meeting”, recalled a former Digital Research executive, “we were talking about whether to grant stock options to a woman employee. Dorothy said, ‘No, she doesn’t deserve options -- she’s not professional enough; her kids visit her at work after 5:00 p.m.’ Two minutes later, Christy Kildall, their daughter, burst into the boardroom and dragged Gary off with her to the stable to ride horses, ending the meeting. Oh yeah, Dorothy knew about professionalism”.
Let’s say for a minute that Eubanks was correct, and Gary Kildall didn’t give a shit about the business. Who said that he had to? CP/M was his invention; Digital Research was his company. The fact that it succeeded beyond anyone’s expectations did not make those earlier expectations invalid. Gary Kildall’s ambition was limited, something that is not supposed to be a factor in American business. If you hope for a thousand and get a million, you are still expected to want more, but he didn’t.
It’s easy for authors of business books to get rankled by characters like Gary Kildall who don’t take good care of the empires they have built. But in fact, there are no absolute rules of behavior for companies like Digital Research. The business world is, like computers, created entirely by people. God didn’t come down and say there will be a corporation and it will have a board of directors. We made that up. Gary Kildall made up Digital Research.
Eubanks, who came to Digital Research after a naval career spent aboard submarines, hated Kildall’s apparent lack of discipline, not understanding that it was just a different kind of discipline. Kildall was into programming, not business.
“Programming is very much a religious experience for a lot of people”, Kildall explained. “If you talk about programming to a group of programmers who use the same language, they can become almost evangelistic about the language. They form a tight-knit community, hold to certain beliefs, and follow certain rules in their programming. It’s like a church with a programming language for a bible”.
Gary Kildall’s bible said that writing a BASIC compiler to go with CP/M might be a shrewd business move, but it would be a step backward technically. Kildall wanted to break new ground, and a BASIC had already been done by Microsoft.
“The unstated rule around Digital Research was that Microsoft did languages, while we did operating systems”, Eubanks explained. “It was never stated emphatically, but I always thought that Gary assumed he had an agreement with Bill Gates about this separation and that as long as we didn’t compete with Microsoft, they wouldn’t compete with us”.
Sure.
The Altair 8800 may have been the first microcomputer, but it was not a commercial success. The problem was that assembly took from forty to an infinite number of hours, depending on the hobbyist’s mechanical ability. When the kit was done, the microcomputer either worked or didn’t. If it worked, the owner had a programmable computer with a BASIC interpreter, ready to run any software he felt like writing.
The first microcomputer that was a major commercial success was the Apple II. It succeeded because it was the first microcomputer that looked like a consumer electronic product. You could buy the Apple from a dealer who would fix it if it broke and would give you at least a little help in learning to operate the beast. The Apple II had a floppy disk drive for data storage, did not require a separate Teletype or video terminal, and offered color graphics in addition to text. Most important, you could buy software written by others that would run on the Apple and with which a novice could do real work.
The Apple II still defines what a low-end computer is like. Twenty-third century archaeologists excavating some ancient ComputerLand stockroom will see no significant functional difference between an Apple II of 1978 and an IBM PS/2 of 1992. Both have processor, memory, storage, and video graphics. Sure, the PS/2 has a faster processor, more memory and storage, and higher-resolution graphics, but that only matters to us today. By the twenty-third century, both machines will seem equally primitive.
The Apple II was guided by three spirits. Steve Wozniak invented the earlier Apple I to show it off to his friends in the Homebrew Computer Club. Steve Jobs was Wozniak’s younger sidekick who came up with the idea of building computers for sale and generally nagged Woz and others until the Apple II was working to his satisfaction. Mike Markkula was the semiretired Intel veteran (and one of Noyce’s boys) who brought the money and status required for the other two to be taken at all seriously.
Wozniak made the Apple II a simple machine that used clever hardware tricks to get good performance at a smallish price (at least to produce -- the retail price of a fully outfitted Apple II was around $3,000). He found a way to allow the microprocessor and the video display to share the same memory. His floppy disk controller, developed during a two-week period in December 1977, used less than a quarter the number of integrated circuits required by other controllers at the time. The Apple’s floppy disk controller made it clearly superior to machines appearing about the same time from Commodore and Radio Shack. More so than probably any other microcomputer, the Apple II was the invention of a single person; even Apple’s original BASIC interpreter, which was always available in read-only memory, had been written by Woz.
Woz made the Apple II a color machine to prove that he could do it and so he could use the computer to play a color version of Breakout, a video game that he and Jobs had designed for Atari. Markkula, whose main contributions at Intel had been in finance, pushed development of the floppy disk drive so the computer could be used to run accounting programs and store resulting financial data for small business owners. Each man saw the Apple II as a new way of fulfilling an established need -- to replace a video game for Woz and a mainframe for Markkula. This followed the trend that new media tend to imitate old media.
Radio began as vaudeville over the air, while early television was radio with pictures. For most users (though not for Woz) the microcomputer was a small mainframe, which explained why Apple’s first application for the machine was an accounting package and the first application supplied by a third-party developer was a database -- both perfect products for a mainframe substitute. But the Apple II wasn’t a very good mainframe replacement. The fact is that new inventions often have to find uses of their own in order to find commercial success, and this was true for the Apple II, which became successful strictly as a spreadsheet machine, a function that none of its inventors visualized.
At $3,000 for a fully configured system, the Apple II did not have a big future as a home machine. Old-timers like to reminisce about the early days of Apple when the company’s computers were affordable, but the truth is that they never were.
The Apple II found its eventual home in business, answering the prayers of all those middle managers who had not been able to gain access to the company’s mainframe or who were tired of waiting the six weeks it took for the computer department to prepare a report, dragging the answers to simple business questions from corporate data. Instead, they quickly learned to use a spreadsheet program called VisiCalc, which was available at first only on the Apple II.
VisiCalc was a compelling application -- an application so important that it alone justified the computer purchase. Such an application was the last element required to turn the microcomputer from a hobbyist’s toy into a business machine. No matter how powerful and brilliantly designed, no computer can be successful without a compelling application. To the people who bought them, mainframes were really inventory machines or accounting machines, and minicomputers were office automation machines. The Apple II was a VisiCalc machine.
VisiCalc was a whole new thing, an application that had not appeared before on some other platform. There were no minicomputer or mainframe spreadsheet programs that could be downsized to run on a microcomputer. The microcomputer and the spreadsheet came along at the same time. They were made for each other.
VisiCalc came about because its inventor, Dan Bricklin, went to business school. And Bricklin went to business school because he thought that his career as a programmer was about to end; it was becoming so easy to write programs that Bricklin was convinced there would eventually be no need for programmers at all, and he would be out of a job. So in the fall of 1977, 26 years old and worried about being washed up, he entered the Harvard Business School looking toward a new career.
At Harvard, Bricklin had an advantage over other students. He could whip up BASIC programs on the Harvard time-sharing system that would perform financial calculations. The problem with Bricklin’s programs was that they had to be written and rewritten for each new problem. He began to look for a more general way of doing these calculations in a format that would be flexible.
What Bricklin really wanted was not a microcomputer program at all but a specialized piece of hardware -- a kind of very advanced calculator with a heads-up display similar to the weapons system controls on an F-14 fighter. Like Luke Skywalker jumping into the turret of the Millennium Falcon, Bricklin saw himself blasting out financials, locking onto profit and loss numbers that would appear suspended in space before him. It was to be a business tool cum video game, a Saturday Night Special for M.B.A.s, only the hardware technology didn’t exist in those days to make it happen.
Back in the semireal world of the Harvard Business School, Bricklin’s production professor described large blackboards that were used in some companies for production planning. These blackboards, often so long that they spanned several rooms, were segmented in a matrix of rows and columns. The production planners would fill each space with chalk scribbles relating to the time, materials, manpower, and money needed to manufacture a product. Each cell on the blackboard was located in both a column and a row, so each had a two-dimensional address. Some cells were related to others, so if the number of workers listed in cell C-3 was increased, it meant that the amount of total wages in cell D-5 had to be increased proportionally, as did the total number of items produced, listed in cell F-7. Changing the value in one cell required the recalculation of values in all other linked cells, which took a lot of erasing and a lot of recalculating and left the planners constantly worried that they had overlooked recalculating a linked value, making their overall conclusions incorrect.
Given that Bricklin’s Luke Skywalker approach was out of the question, the blackboard metaphor made a good structure for Bricklin’s financial calculator, with a video screen replacing the physical blackboard. Once data and formulas were introduced by the user into each cell, changing one variable would automatically cause all the other cells to be recalculated and changed too. No linked cells could be forgotten. The video screen would show a window on a spreadsheet that was actually held in computer memory. The virtual spreadsheet inside the box could be almost any size, putting on a desk what had once taken whole rooms filled with blackboards. Once the spreadsheet was set up, answering a what-if question like “How much more money will we make if we raise the price of each widget by a dime?” would take only seconds.
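A minimal sketch, assuming nothing about VisiCalc's real internals, of the mechanism being described: cells hold either numbers or formulas written in terms of other cells, and changing one number triggers a recalculation of every dependent cell, so no linked value can be forgotten.

```python
# A toy spreadsheet: a cell is either a constant or a formula written in
# terms of other cells. recalc() re-evaluates every formula, much as
# Bricklin's screen recalculated every linked blackboard square. (A real
# spreadsheet also orders the recalculation by dependency; the formulas
# here depend only on constants, so order does not matter.)
constants = {"C3": 10}                        # number of workers
formulas = {
    "D5": lambda v: v["C3"] * 400,            # total wages
    "F7": lambda v: v["C3"] * 25,             # items produced
}

def recalc():
    v = dict(constants)
    for name, formula in formulas.items():
        v[name] = formula(v)
    return v

print(recalc())        # {'C3': 10, 'D5': 4000, 'F7': 250}
constants["C3"] = 12   # the only change the user types in
print(recalc())        # every dependent cell updates automatically
```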
His production professor loved the idea, as did Bricklin’s accounting professor. Bricklin’s finance professor, who had others to do his computing for him, said there were already financial analysis programs running on mainframes, so the world did not need Dan Bricklin’s little program. Only the world did need Dan Bricklin’s little program, which still didn’t have a name.
It’s not surprising that VisiCalc grew out of a business school experience because it was the business schools that were producing most of the future VisiCalc users. They were the thousands of M.B.A.s who were coming into the workplace trained in analytical business techniques and, even more important, in typing. They had the skills and the motivation but usually not the access to their company computer. They were the first generation of businesspeople who could do it all by themselves, given the proper tools.
Bricklin cobbled up a demonstration version of his idea over a weekend. It was written in BASIC, was slow, and had only enough rows and columns to fill a single screen, but it demonstrated many of the basic functions of the spreadsheet. For one thing, it just sat there. This is the genius of the spreadsheet; it’s event driven. Unless the user changes a cell, nothing happens. This may not seem like much, but being event driven makes a spreadsheet totally responsive to the user; it puts the user in charge in a way that most other programs did not. VisiCalc was a spreadsheet language, and what the users were doing was rudimentary programming, without the anxiety of knowing that’s what it was.
By the time Bricklin had his demonstration program running, it was early 1978 and the mass market for microcomputers, such as it was, was being vied for by the Apple II, Commodore PET, and the Radio Shack TRS-80. Since he had no experience with micros, and so no preference for any particular machine, Bricklin and Bob Frankston, his old friend from MIT and new partner, developed VisiCalc for the Apple II, strictly because that was the computer their would-be publisher loaned them in the fall of 1978. No technical merit was involved in the decision.
Dan Fylstra was the publisher. He had graduated from Harvard Business School a year or two before and was trying to make a living selling microcomputer chess programs from his home. Fylstra’s Personal Software was the archetypal microcomputer application software company. Bill Gates at Microsoft and Gary Kildall at Digital Research were specializing in operating systems and languages, products that were lumped together under the label of systems software, and were mainly sold to hardware manufacturers rather than directly to users. But Fylstra was selling applications direct to retailers and end users, often one program at a time. With no clear example to follow, he had to make most of the mistakes himself, and did.
Since there was no obvious success story to emulate, no retail software company that had already stumbled across the rules for making money, Fylstra dusted off his Harvard case study technique and looked for similar industries whose rules could be adapted to the microcomputer software biz. About the closest example he could find was book publishing, where the author accepts responsibility for designing and implementing the product, and the publisher is responsible for manufacturing, distribution, marketing, and sales. Transferred to the microcomputer arena, this meant that Software Arts, the company Bricklin and Frankston formed, would develop VisiCalc and its subsequent versions, while Personal Software, Fylstra’s company, would copy the floppy disks, print the manuals, place ads in computer publications, and distribute the product to retailers and the public. Software Arts would receive a royalty of 37.5 percent on copies of VisiCalc sold at retail and 50 percent for copies sold wholesale. “The numbers seemed fair at the time,” Fylstra said.
Bricklin was still in school, so he and Frankston divided their efforts in a way that would become a standard for microcomputer programming projects. Bricklin designed the program, while Frankston wrote the actual code. Bricklin would say, “This is the way the program is supposed to look, these are the features, and this is the way it should function”, but the actual design of the internal program was left up to Bob Frankston, who had been writing software since 1963 and was clearly up to the task. Frankston added a few features on his own, including one called “lookup”, which could extract values from a table, so he could use VisiCalc to do his taxes.
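A lookup of that general sort -- scan a table for the largest key not exceeding the argument and return its paired value -- can be sketched in a few lines. The bracket figures below are invented purely for illustration, and the function is only an assumption about how such a feature behaves, not Frankston's code.

```python
# A hedged sketch of a table lookup of the general kind Frankston added:
# return the value paired with the largest key not exceeding x.
def lookup(x, table):
    result = None
    for threshold, value in sorted(table):
        if threshold <= x:
            result = value
    return result

tax_brackets = [(0, 0.00), (10_000, 0.15), (40_000, 0.28)]
print(lookup(25_000, tax_brackets))   # 0.15
```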
Bob Frankston is a gentle man and a brilliant programmer who lives in a world that is just slightly out of sync with the world in which you and I live. (Okay, so it’s out of sync with the world in which you live.) When I met him, Frankston was chief scientist at Lotus Development, the people who gave us the Lotus 1-2-3 spreadsheet. In a personal computer hardware or software company, being named chief scientist means that the boss doesn’t know what to do with you. Chief scientists don’t generally have to do anything; they’re just smart people whom the company doesn’t want to lose to a competitor. So they get a title and an office and are obliged to represent the glorious past at all company functions. At Apple Computer, they call them Apple Fellows, because you can’t have more than one chief scientist.
Bob Frankston, a modified nerd (he combined the requisite flannel shirt with a full beard), seemed not to notice that his role of chief scientist was a sham, because to him it wasn’t; it was the perfect opportunity to look inward and think deep thoughts without regard to their marketability.
“Why are you doing this as a book?” Frankston asked me over breakfast one morning in Newton, Massachusetts. By “this”, he meant the book you have in your hands right now, the major literary work of my career and, I hope, the basis of an important American fortune. “Why not do it as a hypertext file that people could just browse through on their computers?”
I will not be browsed through. The essence of writing books is the author’s right to tell the story in his own words and in the order he chooses. Hypertext, which allows an instant accounting of how many times the words Dynamic Random-Access Memory or fuck appear, completely eliminates what I perceive as my value-added, turns this exercise into something like the Yellow Pages, and totally eliminates the prospect that it will help fund my retirement.
“Oh”, said Frankston, with eyebrows raised. “Okay”.
Meanwhile, back in 1979, Bricklin and Frankston developed the first version of VisiCalc on an Apple II emulator running on a minicomputer, just as Microsoft BASIC and CP/M had been written. Money was tight, so Frankston worked at night, when computer time was cheaper and when the time-sharing system responded faster because there were fewer users.
They thought that the whole job would take a month, but it took close to a year to finish. During this time, Fylstra was showing prerelease versions of the product to the first few software retailers and to computer companies like Apple and Atari. Atari was interested but did not yet have a computer to sell. Apple’s reaction to the product was lukewarm.
VisiCalc hit the market in October 1979, selling for $100. The first 100 copies went to Marv Goldschmitt’s computer store in Bedford, Massachusetts, where Dan Bricklin appeared regularly to give demonstrations to bewildered customers. Sales were slow. Nothing like this product had existed before, so it would be a mistake to blame the early microcomputer users for not realizing they were seeing the future when they stared at their first VisiCalc screen.
Nearly every software developer in those days believed that small businesspeople would be the main users of any financial products they’d develop. Markkula’s beloved accounting system, for example, would be used by small retailers and manufacturers who could not afford access to a time-sharing system and preferred not to farm the job out to an accounting service. Bricklin’s spreadsheet would be used by these same small businesspeople to prepare budgets and forecast business trends. Automation was supposed to come to the small business community through the microcomputer just as it had come to the large and medium businesses through mainframes and minicomputers. But it didn’t work that way.
The problem with the small business market was that small businesses weren’t, for the most part, very businesslike. Most small businesspeople didn’t know what they were doing. Accounting was clearly beyond them.
At the time, sales to hobbyists and would-be computer game players were topping out, and small businesses weren’t buying. Apple and most of its competitors were in real trouble. The personal computer revolution looked as if it might last only five years. But then VisiCalc sales began to kick in.
Among the many customers who watched VisiCalc demos at Marv Goldschmitt’s computer store were a few businesspeople -- rare members of both the set of computer enthusiasts and the economic establishment. Many of these people had bought Apple IIs, hoping to do real work until they attempted to come to terms with the computer’s forty-column display and lack of lowercase letters. In VisiCalc, they found an application that did not care about lowercase letters, and since the program used a view through the screen on a larger, virtual spreadsheet, the forty-column limit was less of one. For $100, they took a chance, carried the program home, then eventually took both the program and the computer it ran on with them to work. The true market for the Apple II turned out to be big business, and it was through the efforts of enthusiast employees, not Apple marketers, that the Apple II invaded industry.
“The beautiful thing about the spreadsheet was that customers in big business were really smart and understood the benefits right away”, said Trip Hawkins, who was in charge of small business strategy at Apple. “I visited Westinghouse in Pittsburgh. The company had decided that Apple II technology wasn’t suitable, but 1,000 Apple IIs had somehow arrived in the corporate headquarters, bought with petty cash funds and popularized by the office intelligentsia”.
Hawkins was among the first to realize that the spreadsheet was a new form of computer life and that VisiCalc -- the only spreadsheet on the market and available at first only on the Apple II -- would be Apple’s tool for entering, maybe dominating, the microcomputer market for medium and large corporations. VisiCalc was a strategic asset and one that had to be tied up fast before Bricklin and Frankston moved it onto other platforms like the Radio Shack TRS-80.
“When I brought the first copies of VisiCalc into Apple, it was clear to me that this was an important application, vital to the success of the Apple II”, Hawkins said. “We didn’t want it to appear on the Radio Shack or on the IBM machine we knew was coming, so I took Dan Fylstra to lunch and talked about a buyout. The price we settled on would have been $1 million worth of Apple stock, which would have been worth much more later. But when I took the deal to Markkula for approval, he said, ‘No, it’s too expensive’”.
A million dollars was an important value point in the early microcomputer software business. Every programmer who bothered to think about money at all looked toward the time when he would sell out for a cool million. Apple could have used ownership of the program to dominate business microcomputing for years. The deal would have been good, too, for Dan Fylstra, who so recently had been selling chess programs out of his apartment. Except that Dan Fylstra didn’t own VisiCalc -- Dan Bricklin and Bob Frankston did. The deal came and went without the boys in Massachusetts even being told.
Reprinted with permission
Ninth in a series. Robert X. Cringely's brilliant look at the rise of the personal computing industry continues, explaining why PCs aren't mini-mainframes and share little direct lineage with them.
Published in 1991, Accidental Empires is an excellent lens for viewing not just the past but the future of computing.
ACCIDENTAL EMPIRES — CHAPTER THREE
WHY THEY DON’T CALL IT COMPUTER VALLEY
Reminders of just how long I’ve been around this youth-driven business keep hitting me in the face. Not long ago I was poking around a store called the Weird Stuff Warehouse, a sort of Silicon Valley thrift shop where you can buy used computers and other neat junk. It’s right across the street from Fry’s Electronics, the legendary computer store that fulfills every need of its techie customers by offering rows of junk food, soft drinks, girlie magazines, and Maalox, in addition to an enormous selection of new computers and software. You can’t miss Fry’s; the building is painted to look like a block-long computer chip. The front doors are labeled Enter and Escape, just like keys on a computer keyboard.
Weird Stuff, on the other side of the street, isn’t painted to look like anything in particular. It’s just a big storefront filled with tables and bins holding the technological history of Silicon Valley. Men poke through the ever-changing inventory of junk while women wait near the door, rolling their eyes and telling each other stories about what stupid chunk of hardware was dragged home the week before.
Next to me, a gray-haired member of the short-sleeved sport shirt and Hush Puppies school of 1960s computer engineering was struggling to drag an old printer out from under a table so he could show his 8-year-old grandson the connector he’d designed a lifetime ago. Imagine having as your contribution to history the fact that pin 11 is connected to a red wire, pin 18 to a blue wire, and pin 24 to a black wire.
On my own search for connectedness with the universe, I came across a shelf of Apple III computers for sale for $100 each. Back in 1979, when the Apple III was still six months away from being introduced as a $3,000 office computer, I remember sitting in a movie theater in Palo Alto with one of the Apple III designers, pumping him for information about it.
There were only 90,000 Apple III computers ever made, which sounds like a lot but isn’t. The Apple III had many problems, including the fact that the automated machinery that inserted dozens of computer chips on the main circuit board didn’t push them into their sockets firmly enough. Apple’s answer was to tell 90,000 customers to pick up their Apple III carefully, hold it twelve to eighteen inches above a level surface, and then drop it, hoping that the resulting crash would reseat all the chips.
Back at the movies, long before the Apple III’s problems, or even its potential, were known publicly, I was just trying to get my friend to give me a basic description of the computer and its software. The film was Barbarella, and all I can remember now about the movie or what was said about the computer is this image of Jane Fonda floating across the screen in simulated weightlessness, wearing a costume with a clear plastic midriff. But then the rest of the world doesn’t remember the Apple III at all.
It’s this relentless throwing away of old technology, like the nearly forgotten Apple III, that characterizes the personal computer business and differentiates it from the business of building big computers, called mainframes, and minicomputers. Mainframe technology lasts typically twenty years; PC technology dies and is reborn every eighteen months.
There were computers in the world long before we called any of them “personal”. In fact, the computers that touched our lives before the mid-1970s were as impersonal as hell. They sat in big air-conditioned rooms at insurance companies, phone companies, and the IRS, and their main function was to screw up our lives by getting us confused with some other guy named Cringely, who was a deadbeat, had a criminal record, and didn’t much like to pay parking tickets. Computers were instruments of government and big business, and except for the punched cards that came in the mail with the gas bill, which we were supposed to return obediently with the money but without any folds, spindling, or mutilation, they had no physical presence in our lives.
How did we get from big computers that lived in the basement of office buildings to the little computers that live on our desks today? We didn’t. Personal computers have almost nothing to do with big computers. They never have.
A personal computer is an electronic gizmo that is built in a factory and then sold by a dealer to an individual or a business. If everything goes as planned, the customer will be happy with the purchase, and the company that makes the personal computer, say Apple or Compaq, won’t hear from that customer again until he or she buys another computer. Contrast that with the mainframe computer business, where big computers are built in a factory, sold directly to a business or government, installed by the computer maker, serviced by the computer maker (for a monthly fee), financed by the computer maker, and often running software written by the computer maker (and licensed, not sold, for another monthly fee). The big computer company makes as much money from servicing, financing, and programming the computer as it does from selling it. It not only wants to continue to know the customer, it wants to be in the customer’s dreams.
The only common element in these two scenarios is the factory. Everything else is different. The model for selling personal computers is based on the idea that there are millions of little customers out there; the model for selling big computers has always been based on the idea that there are only a few large customers.
When IBM engineers designed the System 650 mainframe in the early 1950s, their expectation was to build fifty in all, and the cost structure that was built in from the start allowed the company to make a profit on only fifty machines. Of course, when computers became an important part of corporate life, IBM found itself selling far more than fifty -- 1,500, in fact -- with distinct advantages of scale that brought gross profit margins up to the 60 to 70 percent range, a range that computer companies eventually came to expect. So why bother with personal computers?
Big computers and little computers are completely different beasts created by radically different groups of people. It’s logical, I know, to assume that the personal computer came from shrinking a mainframe, but that’s not the way it happened. The PC business actually grew up from the semiconductor industry. Instead of being a little mainframe, the PC is, in fact, more like an incredibly big chip. Remember, they don’t call it Computer Valley. They call it Silicon Valley, and it’s a place that was invented one afternoon in 1957 when Bob Noyce and seven other engineers quit en masse from Shockley Semiconductor.
William Shockley was a local boy and amateur magician who had gone on to invent the transistor at Bell Labs in the late 1940s and by the mid-1950s was on his own building transistors in what had been apricot drying sheds in Mountain View, California.
Shockley was a good scientist but a bad manager. He posted a list of salaries on the bulletin board, pissing off those who were being paid less for the same work. When the work wasn’t going well, he blamed sabotage and demanded lie detector tests. That did it. Just weeks after they’d toasted Shockley’s winning the Nobel Prize in physics by drinking champagne over breakfast at Dinah’s Shack, a red clapboard restaurant on El Camino Real, the “Traitorous Eight”, as Dr. S. came to call them, hit the road.
For Shockley, it was pretty much downhill from there; today he’s remembered more for his theories of racial superiority and for starting a sperm bank for geniuses in the 1970s than for the breakthrough semiconductor research he conducted in the 1940s and 1950s. (Of course, with several fluid ounces of Shockley semen still sitting on ice, we may not have heard the last of the doctor yet.)
Noyce and the others started Fairchild Semiconductor, the archetype for every Silicon Valley start-up that has followed. They got the money to start Fairchild from a young investment banker named Arthur Rock, who found venture capital for the firm. This is the pattern that has been followed ever since as groups of technical types split from their old companies, pick up venture capital to support their new idea, and move on to the next start-up. More than fifty new semiconductor companies eventually split off in this way from Fairchild alone.
At the heart of every start-up is an argument. A splinter group inside a successful company wants to abandon the current product line and bet the company on some radical new technology. The boss, usually the guy who invented the current technology, thinks this idea is crazy and says so, wishing the splinter group well on their new adventure. If he’s smart, the old boss even helps his employees to leave by making a minority investment in their new company, just in case they are among the 5 percent of start-ups that are successful.
The appeal of the start-up has always been that it’s a small operation, usually led by the smartest guy in the room but with the assistance of all players. The goals of the company are those of its people, who are all very technically oriented. The character of the company matches that of its founders, who were inevitably engineers -- regular guys. Noyce was just a preacher’s kid from Iowa, and his social sensibilities reflected that background.
There was no social hierarchy at Fairchild -- no reserved parking spaces or executive dining rooms -- and that remained true even later when the company employed thousands of workers and Noyce was long gone. There was no dress code. There were hardly any doors; Noyce had an office cubicle, built from shoulder-high partitions, just like everybody else. Thirty years later, he still had only a cubicle, along with limitless wealth.
They use cubicles, too, at Hewlett-Packard, which at one point in the late 1970s had more than 50,000 employees, but only three private offices. One office belonged to Bill Hewlett, one to David Packard, and the third to a guy named Paul Ely, who annoyed so many coworkers with his bellowing on the telephone that the company finally extended his cubicle walls clear to the ceiling. It looked like a freestanding elevator shaft in the middle of a vast open office.
The Valley is filled with stories of Bob Noyce as an Everyman with deep pockets. There was the time he stood in a long line at his branch bank and then asked the teller for a cashier’s check for $1.3 million from his personal savings, confiding gleefully that he was going to buy a Learjet that afternoon. Then, after his divorce and remarriage, Noyce tried to join the snobbish Los Altos Country Club, only to be rejected because the club did not approve of his new wife, so he wrote another check and simply duplicated the country club facilities on his own property, within sight of the Los Altos clubhouse. “To hell with them,” he said.
As a leader, Noyce was half high school science teacher and half athletic team captain. Young engineers were encouraged to speak their minds, and they were given authority to buy whatever they needed to pursue their research. No idea was too crazy to be at least considered, because Noyce realized that great discoveries lay in crazy ideas and that rejecting out of hand the ideas of young engineers would just hasten that inevitable day when they would take off for their own start-up.
While Noyce’s ideas about technical management sound all too enlightened to be part of anything called big business, they worked well at Fairchild and then at Noyce’s next creation, Intel. Intel was started, in fact, because Noyce couldn’t get Fairchild’s eastern owners to accept the idea that stock options should be a part of compensation for all employees, not just for management. He wanted to tie everyone, from janitors to bosses, into the overall success of the company, and spreading the wealth around seemed the way to go.
This management style still sets the standard for every computer, software, and semiconductor company in the Valley today, where office doors are a rarity and secretaries hold shares in their company’s stock. Some companies follow the model well, and some do it poorly, but every CEO still wants to think that the place is being run the way Bob Noyce would have run it.
The semiconductor business is different from the business of building big computers. It costs a lot to develop a new semiconductor part but not very much to manufacture it once the design is proved. This makes semiconductors a volume business, where the most profitable product lines are those manufactured in the greatest volume rather than those that can be sold in smaller quantities with higher profit margins. Volume is everything.
To build volume, Noyce cut all Fairchild components to a uniform price of one dollar, which was in some cases not much more than the cost of manufacturing them. Some of Noyce’s partners thought he was crazy, but volume grew quickly, followed by profits, as Fairchild expanded production again and again to meet demand, continually cutting its cost of goods at the same time. The concept of continually dropping electronic component prices was born at Fairchild. The cost per transistor dropped by a factor of 10,000 over the next thirty years.
To avoid building a factory that was 10,000 times as big, Noyce came up with a way to give customers more for their money while keeping the product price point at about the same level as before. While the cost of semiconductors was ever falling, the cost of electronic subassemblies continued to increase with the inevitably rising price of labor. Noyce figured that even this trend could be defeated if several components could be built together on a single piece of silicon, eliminating much of the labor from electronic assembly. It was 1959, and Noyce called his idea an integrated circuit. “I was lazy,” he said. “It just didn’t make sense to have people soldering together these individual components when they could be built as a single part.”
Jack Kilby at Texas Instruments had already built several discrete components on the same slice of germanium, including the first germanium resistors and capacitors, but Kilby’s parts were connected together on the chip by tiny gold wires that had to be installed by hand. TI’s integrated circuit could not be manufactured in volume.
The twist that Noyce added was to deposit a layer of insulating silicon oxide on the top surface of the chip -- this was called the “planar process”, which had been invented earlier at Fairchild -- and then use a photographic process to print thin metal lines on top of the oxide, connecting the components together on the chip. These metal traces carried current in the same way that Jack Kilby’s gold wires did, but they could be printed on in a single step rather than being installed one at a time by hand.
Using their new photolithography method, Noyce and his boys put first two or three components on a single chip, then ten, then a hundred, then thousands. Today the same area of silicon that once held a single transistor can be populated with more than a million components, all too small to be seen.
Tracking the trend toward ever more complex circuits, Gordon Moore, who cofounded Intel with Noyce, came up with Moore’s Law: the number of transistors that can be built on the same size piece of silicon will double every eighteen months. Moore’s Law still holds true. Intel’s memory chips from 1968 held 1,024 bits of data; the most common memory chips today hold a thousand times as much -- 1,024,000 bits -- and cost about the same.
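The arithmetic is worth making explicit: ten 18-month doublings multiply capacity by 2^10 = 1,024, which is the rough thousandfold jump from the 1,024-bit parts of 1968 to the megabit-class chips described here. A two-line check, purely for illustration:

```python
# Doubling every 18 months: after n doublings, capacity grows by 2**n.
bits_1968 = 1024
print(bits_1968 * 2 ** 10)   # 1,048,576 bits -- a megabit-class chip,
                             # roughly a thousand times the 1968 part
```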
The integrated circuit -- the IC -- also led to a trend in the other direction -- toward higher price points, made possible by ever more complex semiconductors that came to do the work of many discrete components. In 1971, Ted Hoff at Intel took this trend to its ultimate conclusion, inventing the microprocessor, a single chip that contained most of the logic elements used to make a computer. Here, for the first time, was a programmable device to which a clever engineer could add a few memory chips and a support chip or two and turn it into a real computer you could hold in your hands. There was no software for this new computer, of course -- nothing that could actually be done with it -- but the computer could be held in your hands or even sold over the counter, and that fact alone was enough to force a paradigm shift on Silicon Valley.
It was with the invention of the microprocessor that the rest of the world finally disappointed Silicon Valley. Until that point, the kids at Fairchild, Intel, and the hundred other chipmakers that now occupied the southern end of the San Francisco peninsula had been farmers, growing chips that were like wheat from which the military electronics contractors and the computer companies could bake their rolls, bagels, and loaves of bread -- their computers and weapon control systems. But with their invention of the microprocessor, the Valley’s growers were suddenly harvesting something that looked almost edible by itself. It was as though they had been supplying for years these expensive bakeries, only to undercut them all by inventing the Twinkie.
But the computer makers didn’t want Intel’s Twinkies. Microprocessors were the most expensive semiconductor devices ever made, but they were still too cheap to be used by the IBMs, the Digital Equipment Corporations, and the Control Data Corporations. These companies had made fortunes by convincing their customers that computers were complex, incredibly expensive devices built out of discrete components; building computers around microprocessors would destroy this carefully crafted concept. Microprocessor-based computers would be too cheap to build and would have to sell for too little money. Worse, their lower part counts would increase reliability, hurting the service income that was an important part of every computer company’s bottom line in those days.
And the big computer companies just didn’t have the vision needed to invent the personal computer. Here’s a scene that happened in the early 1960s at IBM headquarters in Armonk, New York. IBM chairman Tom Watson, Jr., and president Al Williams were being briefed on the concept of computing with video display terminals and time-sharing, rather than with batches of punch cards. They didn’t understand the idea. These were intelligent men, but they had a firmly fixed concept of what computing was supposed to be, and it didn’t include video display terminals. The briefing started over a second time, and finally a light bulb went off in Al Williams’s head. “So what you are talking about is data processing but not in the same room!” he exclaimed.
IBM played for a short time with a concept it called teleprocessing, which put a simple computer terminal on an executive’s desk, connected by telephone line to a mainframe computer to look into the bowels of the company and know instantly how many widgets were being produced in the Muncie plant. That was the idea, but what IBM discovered from this mid-1960s exercise was that American business executives didn’t know how to type and didn’t want to learn. They had secretaries to type for them. No data were gathered on what middle managers would do with such a terminal because it wasn’t aimed at them. Nobody even guessed that there would be millions of M.B.A.s hitting the streets over the following twenty years, armed with the ability to type and with the quantitative skills to use such a computing tool and to do some real damage with it. But that was yet to come, so exit teleprocessing, because IBM marketers chose to believe that this test indicated that American business executives would never be interested.
In order to invent a particular type of computer, you have to want first to use it, and the leaders of America’s computer companies did not want a computer on their desks. Watson and Williams sold computers but they didn’t use them. Williams’s specialty was finance; it was through his efforts that IBM had turned computer leasing into a goldmine. Watson was the son of God -- Tom Watson Sr. -- and had been bred to lead the blue-suited men of IBM, not to design or use computers. Watson and Williams didn’t have computer terminals at their desks. They didn’t even work for a company that believed in terminals. Their concept was of data processing, which at IBM meant piles of paper cards punched with hundreds of rectangular, not round, holes. Round holes belonged to Univac.
The computer companies for the most part rejected the microprocessor, calling it too simple to perform their complex mainframe voodoo. It was an error on their part, and not lost on the next group of semiconductor engineers who were getting ready to explode from their current companies into a whole new generation of start-ups. This time they built more than just chips and ICs; they built entire computers, still following the rules for success in the semiconductor business: continual product development; a new family of products every year or two; ever increasing functionality; ever decreasing price for the same level of function; standardization; and volume, volume, volume.
It takes society thirty years, more or less, to absorb a new information technology into daily life. It took about that long to turn movable type into books in the fifteenth century. Telephones were invented in the 1870s but did not change our lives until the 1900s. Motion pictures were born in the 1890s but became an important industry in the 1920s. Television, invented in the mid-1920s, took until the mid-1950s to bind us to our sofas.
We can date the birth of the personal computer somewhere between the invention of the microprocessor in 1971 and the introduction of the Altair hobbyist computer in 1975. Either date puts us today about halfway down the road to personal computers’ being a part of most people’s everyday lives, which should be consoling to those who can’t understand what all the hullabaloo is about PCs. Don’t worry; you’ll understand it in a few years, by which time they’ll no longer be called PCs.
By the time that understanding is reached, and personal computers have wormed into all our lives to an extent far greater than they are today, the whole concept of personal computing will probably have changed. That’s the way it is with information technologies. It takes us quite a while to decide what to do with them.
Radio was invented with the original idea that it would replace telephones and give us wireless communication. That implies two-way communication, yet how many of us own radio transmitters? In fact, the popularization of radio came as a broadcast medium, with powerful transmitters sending the same message -- entertainment -- to thousands or millions of inexpensive radio receivers. Television was the same way, envisioned at first as a two-way visual communication medium. Early phonographs could record as well as play and were supposed to make recordings that would be sent through the mail, replacing written letters. The magnetic tape cassette was invented by Philips for dictation machines, but we use it to hear music on Sony Walkmans. Telephones went the other direction, since Alexander Graham Bell first envisioned his invention being used to pipe music to remote groups of people.
The point is that all these technologies found their greatest success being used in ways other than were originally expected. That’s what will happen with personal computers too. Fifteen years from now, we won’t be able to function without some sort of machine with a microprocessor and memory inside. Though we probably won’t call it a personal computer, that’s what it will be.
It takes new ideas a long time to catch on -- time that is mainly devoted to evolving the idea into something useful. This fact alone dumps most of the responsibility for early technical innovation in the laps of amateurs, who can afford to take the time. Only those who aren’t trying to make money can afford to advance a technology that doesn’t pay.
This explains why the personal computer was invented by hobbyists and supported by semiconductor companies, eager to find markets for their microprocessors, by disaffected mainframe programmers, who longed to leave their corporate/mainframe world and get closer to the machine they loved, and by a new class of counterculture entrepreneurs, who were looking for a way to enter the business world after years of fighting against it.
The microcomputer pioneers were driven primarily to create machines and programs for their own use or so they could demonstrate them to their friends. Since there wasn’t a personal computer business as such, they had little expectation that their programming and design efforts would lead to making a lot of money. With a single strategic exception -- Bill Gates of Microsoft -- the idea of making money became popular only later.
These folks were pursuing adventure, not business. They were the computer equivalents of the barnstorming pilots who flew around America during the 1920s, putting on air shows and selling rides. Like the barnstormers had, the microcomputer pioneers finally discovered a way to live as they liked. Both the barnstormers and microcomputer enthusiasts were competitive and were always looking for something against which they could match themselves. They wanted independence and total control, and through the mastery of their respective machines, they found it.
Barnstorming was made possible by a supply of cheap surplus aircraft after World War I. Microcomputers were made possible by the invention of solid state memory and the microprocessor. Both barnstorming and microcomputing would not have happened without previous art. The barnstormers needed a war to train them and to leave behind a supply of aircraft, while microcomputers would not have appeared without mainframe computers to create a class of computer professionals and programming languages.
Like early pilots and motorists, the first personal computer drivers actually enjoyed the hazards of their primitive computing environments. Just getting from one place to another in an early automobile was a challenge, and so was getting a program to run on the first microcomputers. Breakdowns were frequent, even welcome, since they gave the enthusiast something to brag about to friends. The idea of doing real work with a microcomputer wasn’t even considered.
Planes that were easy to fly, cars that were easy to drive, computers that were easy to program and use weren’t nearly as interesting as those that were cantankerous. The test of the pioneer was how well he did despite his technology. In the computing arena, this meant that the best people were those who could most completely adapt to the idiosyncrasies of their computers. This explains the rise of arcane computer jargon and the disdain with which “real programmers” still often view computers and software that are easy to use. They interpret “ease of use” as “lack of challenge”. The truth is that easy-to-use computers and programs take much more skill to produce than did the hairy-chested, primitive products of the mid-1970s.
Since there really wasn’t much that could be done with microcomputers back then, the great challenge was found in overcoming the adversity involved in doing anything. Those who were able to get their computers and programs running at all went on to become the first developers of applications.
With few exceptions, early microcomputer software came from the need of some user to have software that did not yet exist. He needed it, so he invented it. And son of a gun, bragging about the program at his local computing club often turned up others in the membership who needed that software too and wanted to buy it -- and an industry was born.
Reprinted with permission
Seventh in a series. Editor: Classic 1991 tome Accidental Empires continues, looking at a uniquely American cultural phenomenon.
The founders of the microcomputer industry were groups of boys who banded together to give themselves power. For the most part, they came from middle-class and upper-middle-class homes in upscale West Coast communities. They weren’t rebels; they resented their parents and society very little. Their only alienation was the usual hassle of the adolescent -- a feeling of being prodded into adulthood on somebody else’s terms. So they split off and started their own culture, based on the completely artificial but totally understandable rules of computer architecture. They defined, built, and controlled (and still control) an entire universe in a box -- an electronic universe of ideas rather than people -- where they made all the rules, and could at last be comfortable. They didn’t resent the older people around them -- you and me, the would-be customers -- but came to pity us because we couldn’t understand the new order inside the box -- the microcomputer.
And turning this culture into a business? That was just a happy accident that allowed these boys to put off forever the horror age -- that dividing line to adulthood that they would otherwise have been forced to cross after college.
The 1980s were not kind to America. Sitting at the end of the longest period of economic expansion in history, what have we gained? Budget deficits are bigger. Trade deficits are bigger. What property we haven’t sold we’ve mortgaged. Our basic industries are being moved overseas at an alarming rate. We pretended for a time that junk bond traders and corporate disassemblers create wealth, but they don’t. America is turning into a service economy and telling itself that’s good. But it isn’t.
America was built on the concept of the frontier. We carved a nation out of the wilderness, using as tools enthusiasm, adolescent energy, and an unwillingness to recognize limitations. But we are running out of recognized frontiers. We are getting older and stodgier and losing our historic advantage in the process. In contrast, the PC business is its own frontier, created inside the box by inward-looking nerds who could find no acceptable challenge in the adult world. Like any other true pioneers, they don’t care about what is possible or not possible; they are dissatisfied with the present and excited about the future. They are anti-establishment and rightly see this as a prerequisite for success.
Time after time, Japanese companies have aimed at dominating the PC industry in the same way that similar programs have led to Japanese success in automobiles, steel, and consumer electronics. After all, what is a personal computer but a more expensive television, calculator, or VCR? With the recent exception of laptop computers, though, Japan’s luck has been poor in the PC business. Korea, Taiwan, and Singapore have fared similarly and are still mainly sources of cheap commodity components that go into American-designed and -built PCs.
As for the Europeans, they are obsessed with style, thinking that the external design of a computer is as important as its raw performance. They are wrong: horsepower sells. The results are high-tech toys that look pretty, cost a lot, and have such low performance that they suggest Europe hasn’t quite figured out what PCs are even used for.
It’s not that the Japanese and others can’t build personal computers as well as we can; manufacturing is what they do best. What puts foreigners at such a disadvantage is that they usually don’t know what to build because the market is changing so quickly; a new generation of machines and software appears every eighteen months.
The Japanese have grown rich in other industries by moving into established markets with products that are a little better and a little cheaper, but in the PC business the continual question that needs asking is, “Better than what?" Last year’s model? This year’s? Next year’s? By the time the Asian manufacturers think they have a sense of what to aim for, the state of the art has usually changed.
In the PC business, constant change is the only norm, and adolescent energy is the source of that change.
The Japanese can’t take over because they are too grownup. They are too businesslike, too deliberate, too slow. They keep trying, with little success, to find some level at which it all makes sense. But that level does not exist in this business, which has grown primarily without adult supervision.
Smokestacks, skyscrapers, half-acre mahogany desks, corporate jets, gray hair, the building of things in enormous factories by crowds of faceless, time card-punching workers: these are traditional images of corporate success, even at old-line computer companies like IBM.
Volleyball, junk food, hundred-hour weeks, cubicles instead of offices, T-shirts, factories that either have no workers or run, unseen, in Asia: these are images of corporate success in the personal computer industry today.
The differences in corporate culture are so profound that IBM has as much in common with Tehran or with one of the newly discovered moons of Neptune as it does with a typical personal computer software company. On August 25, 1989, for example, all 280 employees of Adobe Systems Inc., a personal computer software company, armed themselves with waste baskets and garden hoses for a company-wide water fight to celebrate the shipping of a new product. Water fights don’t happen at General Motors, Citicorp, or IBM, but then those companies don’t have Adobe’s gross profit margins of 43 percent either.
We got from boardrooms to water balloons led not by a Tom Watson, a Bill Hewlett, or even a Ross Perot but by a motley group of hobbyist/opportunists who saw a niche that needed to be filled. Mainly academics and nerds, they had no idea how businesses were supposed to be run, no sense of what was impossible, so they faked it, making their own ways of doing business -- ways that are institutionalized today but not generally documented or formally taught. It’s the triumph of the nerds.
Here’s the important part: they are our nerds. And having, by their conspicuous success, helped create this mess we’re in, they had better have a lot to teach us about how to recreate the business spirit we seem to have lost.
Reprinted with permission
Photo Credit: NinaMalyna/Shutterstock
Sixth in a series. Serialization of Robert X. Cringely's classic Accidental Empires makes an unexpected analogy.
The Airport Kid was what they called a boy who ran errands and did odd jobs around a landing field in exchange for airplane rides and the distant prospect of learning to fly. From Lindbergh’s day on, every landing strip anywhere in America had such a kid, sometimes several, who’d caught on to the wonder of flight and wasn’t about to let go.
Technologies usually fade in popularity as they are replaced by new ways of doing things, so the lure of flight must have been awesome, because the airport kids stuck around America for generations. They finally disappeared in the 1970s, killed not by a transcendent technology but by the dismal economics of flight.
The numbers said that unless all of us were airport kids, there would not be economies of scale to make flying cheap enough for any of us. The kids would never own their means of flight. Rather than live and work in the sky, they could only hope for an occasional visit. It was the final understanding of this truth that killed their dream.
When I came to California in 1977, I literally bumped into the Silicon Valley equivalent of the airport kids. They were teenagers, mad for digital electronics and the idea of building their own computers. We met diving through dumpsters behind electronics factories in Palo Alto and Mountain View, looking for usable components in the trash.
But where the airport kids had drawn pictures of airplanes in their school notebooks and dreamed of learning to fly, these new kids in California actually built their simple computers and taught themselves to program. In many ways, their task was easier, since they lived in the shadow of Hewlett-Packard and the semiconductor companies that were rapidly filling what had come to be called Silicon Valley. Their parents often worked in the electronics industry and recognized its value. And unlike flying, the world of microcomputing did not require a license.
Today there are 45 million personal computers in America. Those dumpster kids are grown and occupy important positions in computer and software companies worth billions of dollars. Unlike the long-gone airport kids, these computer kids came to control the means of producing their dreams. They found a way to turn us all into computer kids by lowering the cost and increasing the value of entry to the point where microcomputers today affect all of our lives. And in doing so, they created an industry unlike any other.
This book is about that industry. It is not a history of the personal computer but rather all the parts of a history needed to understand how the industry functions, to put it in some context from which knowledge can be drawn. My job is to explain how this little part of the world really works. Historians have a harder job because they can be faulted for what is left out; explainers like me can get away with printing only the juicy parts.
Juice is my business. I write a weekly gossip column in InfoWorld, a personal computer newspaper. Think for a moment about what a bizarre concept that is -- an industrial gossip column. Rumors and gossip become institutionalized in cultures that are in constant flux. Politics, financial markets, the entertainment industry, and the personal computer business live by rumors. But for gossip to play a role in a culture, it must both serve a useful function and have an audience that sees value in participation -- in originating or spreading the rumor. Readers must feel they have a personal connection -- whether it is to a stock price, Madonna’s marital situation, or the impending introduction of a new personal computer.
And who am I to sit in judgment this way on an entire industry?
I’m a failure, of course.
It takes a failure -- someone who is not quite clever enough to succeed or to be considered a threat -- to gain access to the heart of any competitive, ego-driven industry. This is a business that won’t brook rivals but absolutely demands an audience. I am that audience. I can program (poorly) in four computer languages, though all the computer world seems to care about anymore is a language called C. I have made hardware devices that almost worked. I qualify as the ideal informed audience for all those fragile geniuses who want their greatness to be understood and acknowledged.
About thirty times a week, the second phone on my desk rings. At the other end of that line, or at the sending station of an electronic mail message, or sometimes even on the stamp-licking end of a letter sent through the U.S. mail is a type of person literally unknown outside America. He -- for the callers are nearly always male -- is an engineer or programmer from a personal computer manufacturer or a software publisher. His purpose in calling is to share with me and with my 500,000 weekly readers the confidential product plans, successes, and failures of his company. Specifications, diagrams, parts lists, performance benchmarks -- even computer programs -- arrive regularly, invariably at the risk of somebody’s job. One day it’s a disgruntled Apple Computer old-timer, calling to bitch about the current management and by-the-way reveal the company’s product plans for the next year. The next day it’s a programmer from IBM’s lab in Austin, Texas, calling to complain about an internal rivalry with another IBM lab in England and in the process telling all sorts of confidential information.
What’s at work here is the principle that companies lie, bosses lie, but engineers are generally incapable of lying. If they lied, how could the many complex parts of a computer or a software application be expected to actually work together?
"Yeah, I know I said wire Y-21 would be 12 volts DC, but, heck, I lied".
Nope, it wouldn’t work.
Most engineers won’t even tolerate it when others in their companies lie, which is why I get so many calls from embarrassed or enraged techies undertaking what they view as damage control but their companies probably see as sabotage.
The smartest companies, of course, hide their engineers, never bringing them out in public, because engineers are not to be trusted:
Me: "Great computer! But is there any part of it you’d do differently if you could do it over again?"
Engineer: "Yup, the power supply. Put your hand on it right here. Feel how hot that is? Damn thing’s so overloaded I’m surprised they haven’t been bursting into flames all over the country. I’ve got a fire extinguisher under the table just in case. Oh, I told the company about it, too, but would they listen?"
I love engineers.
This sort of thing doesn’t happen in most other U.S. industries, and it never happens in Asia. Chemists don’t call up the offices of Plastics Design Forum to boast about their new, top-secret thermoplastic alloy. The Detroit Free Press doesn’t hear from engineers at Chrysler, telling about the bore and stroke of a new engine or in what car models that engine is likely to appear, and when. But that’s exactly what happens in the personal computer industry.
Most callers fall into one of three groups. Some are proud of their work but are afraid that the software program or computer system they have designed will be mismarketed or never marketed at all. Others are ashamed of a bad product they have been associated with and want to warn potential purchasers. And a final group talks out of pure defiance of authority.
All three groups share a common feeling of efficacy: They believe that something can be accomplished by sharing privileged information with the world of microcomputing through me. What they invariably want to accomplish is a change in their company’s course, pushing forward the product that might have been ignored, pulling back the one that was released too soon, or just showing management that it can be defied. In a smokestack industry, this would be like a couple of junior engineers at Ford taking it on themselves to go public with their conviction that next year’s Mustang really ought to have fuel injection.
That’s not the way change is accomplished at Ford, of course, where the business of business is taken very seriously, change takes place very slowly, and words like ought don’t have a place outside the executive suite, and maybe not even there. Nor is change accomplished this way in the mainframe computer business, which moves at a pace that is glacial, even in comparison to Ford. But in the personal computer industry, where few executives have traditional business backgrounds or training and a totally new generation of products is introduced every eighteen months, workers can become more committed to their creation than to the organization for which they work.
Outwardly, this lack of organizational loyalty looks bad, but it turns out to be very good. Bad products die early in the marketplace or never appear. Good products are recognized earlier. Change accelerates. And organizations are forced to be more honest. Most especially, everyone involved shares the same understanding of why they are working: to create the product.
Reprinted with permission
Photo Credit: San Diego Air and Space Museum Archive
Fourth in a series. Editor: With introductions one, two and three behind, Robert X. Cringely's serialized version of the 1991 classic Accidental Empires begins.
Years ago, when you were a kid and I was a kid, something changed in America. One moment we were players of baseball, voters, readers of books, makers of dinner, arguers. And a second later, and for every other second since then, we were all just shoppers.
Shopping is what we do; it’s entertainment. Consumers are what we are; we go shopping for fun. Nearly all of our energy goes into buying -- thinking about what we would like to buy or earning money to pay for what we have already bought.
We invented credit cards, suburban shopping malls, and day care just to make our consumerism more efficient. We sent our wives, husbands, children, and grandparents out to work, just to pay for all the stuff we wanted -- needed -- to buy. We invented a thousand colors of eye shadow and more than 400 different models of automobiles, and forced every garage band in America to make a recording of "Louie Louie", just so we’d have enough goods to choose between to fill what free time remained. And when, as Americans are wont to do, we surprised ourselves by coming up with a few extra dollars and a few extra hours to spare, we invented entirely new classes of consumer products to satisfy our addiction. Why else would anyone spend $19.95 to buy an Abdomenizer exercise machine?
I blame it all on the personal computer.
Think about it for a moment. Personal computers came along in the late 1970s and by the mid-1980s had invaded every office and infected many homes. In addition to being the ultimate item of conspicuous consumption for those of us who don’t collect fine art, PCs killed the office typewriter, made most secretaries obsolete, and made it possible for a 27-year-old M.B.A. with a PC, a spreadsheet program and three pieces of questionable data to talk his bosses into looting the company pension plan and doing a leveraged buy-out.
Without personal computers, there would have been -- could have been -- no Michael Milkens or Ivan Boeskys. Without personal computers, there would have been no supply-side economics. But, with the development of personal computers, for the first time in history, a single person could gather together and get a shaky handle on enough data to cure a disease or destroy a career. Personal computers made it possible for businesses to move further and faster than they ever had before, creating untold wealth that we had to spend on something, so we all became shoppers.
Personal computers both created the longest continuous peacetime economic expansion in U.S. history and ended it.
Along the way, personal computers themselves turned into a very big business. In 1990, $70 billion worth of personal computer hardware and software were sold worldwide. After automobiles, energy production, and illegal drugs, personal computers are the largest manufacturing industry in the world and one of the great success stories for American business.
And I’m here to tell you three things:
1. It all happened more or less by accident.
2. The people who made it happen were amateurs.
3. And for the most part they still are.
Reprinted with permission
Photo Credit: Robert Hope
Third in a series. Editor: In parts 1 and 2 of this serialization, Robert X. Cringely presents an updated intro to his landmark rise-of-Silicon Valley book Accidental Empires. Here he presents the original preface from the first edition.
The woman of my dreams once landed a job as the girls’ English teacher at the Hebrew Institute of Santa Clara. Despite the fact that it was a very small operation, her students (about eight of them) decided to produce a school newspaper, which they generally filled with gossipy stories about each other. The premiere issue was printed on good stock with lots of extra copies for grandparents and for interested bystanders like me.
The girls read the stories about each other, then read the stories about each other to each other, pretending that they’d never heard the stories before, much less written them. My cats do something like that, too, I’ve noticed, when they hide a rubber band under the edge of the rug and then allow themselves to discover it a moment later. The newspaper was a tremendous success until mid-morning, when the principal, Rabbi Porter, finally got around to reading his copy. "Where", he asked, "are the morals? None of these stories have morals!"
I’ve just gone through this book you are about to read, and danged if I can find a moral in there either. Just more proof, I guess, of my own lack of morality.
There are lots of people who aren’t going to like this book, whether they are into morals or not. I figure there are three distinct groups of people who’ll hate this thing.
Hate group number one consists of most of the people who are mentioned in the book.
Hate group number two consists of all the people who aren’t mentioned in the book and are pissed at not being able to join hate group number one.
Hate group number three doesn’t give a damn about the other two hate groups and will just hate the book because somewhere I write that object-oriented programming was invented in Norway in 1967, when they know it was invented in Bergen, Norway, on a rainy afternoon in late 1966. I never have been able to please these folks, who are mainly programmers and engineers, but I take some consolation in knowing that there are only a couple hundred thousand of them.
My guess is that most people won’t hate this book, but if they do, I can take it. That’s my job.
Even a flawed book like this one takes the cooperation of a lot of really flawed people. More than 200 of these people are personal computer industry veterans who talked to me on, off, or near the record, sometimes risking their jobs to do so. I am especially grateful to the brave souls who allowed me to use their names.
The delightfully flawed reporters of InfoWorld, who do most of my work for me, continued to pull that duty for this book, too, especially Laurie Flynn, Ed Foster, Stuart Johnston, Alice LaPlante, and Ed Scannell.
A stream of InfoWorld editors and publishers came and went during the time it took me to research and write the book. That they allowed me to do it in the first place is a miracle I attribute to Jonathan Sacks.
Ella Wolfe, who used to work for Stalin and knows a lost cause when she sees one, faithfully kept my mailbox overflowing with helpful clippings from the New York Times.
Paulina Borsook read the early drafts, offering constructive criticism and even more constructive assurance that, yes, there was a book in there someplace. Maybe.
William Patrick of Addison-Wesley believed in the book even when he didn’t believe in the words I happened to be writing. If the book has value, it is probably due to his patience and guidance.
For inspiration and understanding, I was never let down by Pammy, the woman of my dreams.
Finally, any errors in the text are mine. I’m sure you’ll find them.
Reprinted with permission
Second in a series. Editor: Robert X. Cringely is serializing his classic Accidental Empires, yesterday with a modern intro and today with the two past ones. The second edition of the book coincided with the release of the documentary "Triumph of the Nerds". The intros provide insight into a past we take for granted that was then a future in the making. Consider that in 1996, Microsoft had a hit with Windows 95 and Apple was near bankruptcy.
The first edition of Accidental Empires missed something pretty important -- the Internet. Of course there wasn’t much of a commercial Internet in 1990. So I addressed it somewhat with the 1996 revised edition, the preface of which is below. Later today we’ll go on to the original preface from 1990.
1996
In his novel Brighton Rock, Graham Greene’s protagonist, a cocky 14-year-old gang leader named Pinky, has his first sexual experience. Nervously undressing, Pinky is relieved when the girl doesn’t laugh at the sight of his adolescent body. I know exactly how Pinky felt.
When I finished writing this book five years ago, I had no idea how it would be received. Nothing quite like it had been written before. Books about the personal computer industry at that time either were mired in technobabble or described a gee-whiz culture in which there were no bad guys. In this book, there are bad guys. The book contains the total wisdom of my fifteen-plus years in the personal computer business. But what if I had no wisdom? What if I was wrong?
With this new edition, I can happily report that the verdict is in: for the most part, I was right. Hundreds of thousands of readers, many of whom work in the personal computer industry, have generally validated the material presented here. With the exception of an occasional typographical error and my stupid prediction that Bill Gates would not marry, what you are about to read is generally accepted as right on the money.
Not that everyone is happy with me. Certainly Bill Gates doesn’t like to be characterized as a megalomaniac, and Steve Jobs doesn’t like to be described as a sociopath, but that’s what they are. Trust me.
This new edition is prompted by a three-hour television miniseries based on the book and scheduled to play during 1996 in most of the English-speaking world. The production, which took a year to make, includes more than 120 hours of interviews with the really important people in this story -- even the megalomaniacs and sociopaths. These interviews, too, confirmed many of the ideas I originally presented in the book, as well as providing material for the new chapters at the end.
What follows are the fifteen original chapters from the 1992 edition and a pair of new ones updating the story through early 1996.
So let the computer chips fall where they may.
Reprinted with permission
First in a series. February, 2013 -- We stand today near the beginning of the post-PC era. Tablets and smart phones are replacing desktops and notebooks. Clouds are replacing clusters. We’re more dependent than ever on big computer rooms only this time we not only don’t own them, we don’t even know where they are. Three years from now we’ll barely recognize the computing landscape that was built on personal computers. So if we’re going to keep an accurate chronicle of that era, we’d better get to work right now, before we forget how it really happened.
Oddly enough, I predicted all of this almost 25 years ago as you’ll see if you choose to share this journey and read on. But it almost didn’t happen. In fact I wish it had never happened at all…
The story of Accidental Empires began in the spring of 1989. I was in New York covering a computer trade show called PC Expo (now long gone) for InfoWorld, my employer at the time. I was at the Marriott Marquis hotel when the phone rang; it was my wife telling me that she had just been fired from her Silicon Valley marketing job. She had never been fired before and was devastated. I, on the other hand, had been fired from every job I ever held, so professional oblivion seemed a part of the package. But she was crushed. Crushed and in denial. They’d given her two months to find another job inside the company.
"They don’t mean it", I said. "That’s two month severance. There is no job. Look outside the company".
But she wouldn’t listen to me. There had to be a mistake. For two months she interviewed for every open position, but there were no offers. Of course there weren’t. Two months to the day later she was home for good. And a week after that she learned she had breast cancer.
Facing a year or more of surgery, chemotherapy and radiation that would keep my wife from working for at least that long, I had to find a way to make up the income (she made twice what I did at the time). What’s a hack writer to do?
Write a book, of course.
If my wife hadn’t been fired and hadn’t become ill, Accidental Empires would never have happened. As it was, I was the right guy in the right place at the right time and so what I was able to create in the months that followed was something quite new -- an insider view of the personal computer industry written by a guy who was fired from every job he ever held, a guy with no expectation of longevity, no inner censor, nothing to lose and no reason not to tell the truth.
And so it was a sensation, especially in places like Japan where you just don’t write that Bill Gates needed to take more showers (he was pretty ripe most of the time).
Microsoft tried to keep the book from being published at all. They got a copy of the galleys (from the Wall Street Journal, I was told) and threatened the publisher, Addison-Wesley, with being cut off from publishing books about upcoming Microsoft products. This was a huge threat at the time and it was to Addison-Wesley’s credit that they stood by the book.
Bullies tend to be cowards at heart, so I told the publisher that Microsoft wouldn’t follow through, and they didn’t. This presaged Redmond’s "we only threatened and never really intended to do it" antitrust defense.
The book was eventually published in 18 languages. "For Pammy, who knows we need the money" read the dedication that for some reason nobody ever questioned. The German edition, which was particularly bad, having been split between two different translators with a decided shift in tone in the middle, read "Für Pammy, die weiß, dass ich für Geld schreibe".
"For Pammy, who knows I write for money".
Doesn’t have the same ring, does it?
The book only happened because my boss at InfoWorld, the amazing Jonathan Sacks (who later ran AOL), fought for me. It happened because InfoWorld publisher Eric Hippeau signed the contract almost on his way out the door to becoming publisher at arch-rival PC Magazine.
Maybe the book was Hippeau’s joke on his old employer, but it made my career and I haven’t had a vacation since as a result. That’s almost 24 years with no more than three days off, which probably in itself explains much of my behavior.
Accidental Empires is very important to me and I don’t serialize it here lightly. My point is to update it and I trust that my readers of many years will help me do that.
Join me for the next two months as we relive the early history of the personal computer industry. If you remember the events described here, share your memories. If I made a mistake, correct me. If there’s something I missed (Commodore, Atari, etc.) then throw it in and explain its importance. I’ll be with you every step, commenting and responding in turn, and together we’ll improve the book, making it into something even more special.
And what became of Pammy? She’s gone.
Change is the only constant in this -- or any other -- story.
Reprinted with permission
Photo Credit: urfin/Shutterstock
Yesterday was my 60th birthday. When I came to Silicon Valley I was 24. It feels at times like my adult life has paralleled the growth and maturation of the Valley. When I came here there were still orchards. You could buy cherries, fresh from the fields, right on El Camino Real in Sunnyvale. Apricot orchards surrounded Reid-Hillview Airport in San Jose, where I flew in those early days because hangars were already too expensive in Palo Alto. My first Palo Alto apartment rented for $142 per month, and I bought my first house there for $47,000. I first met Intel co-founder Bob Noyce when we were both standing in line at Wells Fargo Bank.
Those days are gone. But that is not to say that these days are worse.
For almost 36 years people have asked me when is the best time to start a company and my answer has always been the same: right now. New technologies have yielded opportunities we could never have imagined and along the way lowered the cost of entry to the point where anyone with a good idea and a willingness to take risks has a chance to make it big.
These are the good old days.
And so it is time for me to move forward with my life and my so-called career. As a guy with three sons ages 10, 8, and 6, you see, my devil sperm has provided me the opportunity to work until I die. Or more properly it has determined that I must work until I am at least 76, when my last kid graduates from college. Whichever comes first.
A year ago I forecast my own retirement of sorts and so I’m here today to explain better what that means, because it certainly doesn’t mean I’ll stop working or that I’ll even go away.
Blogging no longer works for me as a career. As I’ve explained before, declining ad rates have led to this being no longer a viable occupation, at least for me. So while I’m not going completely away I have to assume even more duties that will limit, somewhat, my presence here. I hope you’ll understand.
One thing I am about to do is write a book -- a very serious book for a very real publisher who has written a very substantial check with the assumption that I’ll deliver 120,000 words a year from today. I have to get it finished soon, you see, before book publishing dies in turn.
Then I have a new startup company -- The Startup Channel -- which I hope you’ll hear more of in coming weeks. We’re about to close our seed round and if we don’t then I’ll just pay for the thing myself, it’s that good an idea. I’m open to investment proposals, by the way, but only accredited investors need apply, sorry (that’s the law).
I’ll continue to blog as often as I can, though mainly about startups, somewhat like I did a few years ago when my startup was Home-Account.com and the topics were mortgages and economics.
So you’ll see more of me and less. I’ll be working harder than ever. But why not? There’s plenty still to be done, and I don’t feel a day over 59.
Reprinted with permission
Photo Credit: Joe Wilcox
Third in a series. This is my response to the message from Qualcomm Tricorder X-Prize director Mark Winter, who said my objections to his contest design were without merit.
Let me make a point here: this isn’t about me receiving $10 million. We all know that’s not going to happen. It’s about designing a contest that actually encourages innovation. Please read on as I explain.
I appreciate your position, Mark, and might have sent the same reply were I standing in your shoes. However, I am sure I’ve uncovered exactly the sort of poor contest design that may well doom your effort. As such I will go ahead and publish the letter I wrote to Paul Jacobs so my readers can weigh in on this issue. Certainly it will make your contest more visible.
Bill Joy used to say “not all smart people work at Sun Microsystems,” and by this he meant that there is plenty of useful brainpower outside every organization -- brainpower that is likely to see the germ or find the flaw in any strategy. Well not all smart people work at Qualcomm, Nokia, or the X-Prize Foundation, either. And what worries me about this is the inflexibility engendered in your announcement, which actively discourages the participation of prior art. Why would anyone with something well in hand wait 35 months? For that matter, what makes you think that 35 months from now this prize will even have relevance? What if it doesn’t? Do you just cancel it and say “never mind?”
The proper way to have designed this contest was by setting a goal and an overall ending date, not a date six years out to begin evaluation. If anyone accomplishes the contest tasks prior to that date, they should win. Your design assumes every entrant is starting from scratch. It also assumes every entrant is amateur, because no business these days would plan a 35-month R&D effort toward a single product. Ask Nokia and Qualcomm about that one.
Some of this thinking is simply not thinking while some of it is self-serving thought. You make the point that the X-Prize Foundation is in the business of running these contests, which suggests to me that a 6-7 year time frame probably suits the business model of the Foundation much more than it does the pursuit of this type of knowledge. We’ve seen this before from your organization, notably with the Google Lunar X-Prize, which also seems to have been designed to fail.
Note: In the case of the Google Lunar X-Prize, the X-Prize Foundation changed the rules several times including at one point inserting a delay of more than a year before the “final” rules would be set -- a year during which entrants were supposed to blindly continue raising budgets of up to $100 million.
What I read in your message is an unwillingness to consider changing the contest rules. This is ironic given the immense likelihood that over time you will do just that for any number of reasons. This seems to happen on most of the X-Prize competitions at one point or another. This is the ideal time to correct an obvious flaw, so why not do it?
Or do you think that all smart people actually do work at Nokia, Qualcomm, and the X-Prize Foundation?
All the best,
Bob Cringely
Reprinted with permission
Photo Credit: Creativa/Shutterstock
Second in a series. This message from the X-Prize Foundation is in response to the letter I sent Qualcomm's CEO.
They seem to feel the contest is fine as-is and my objections are without merit.
Dear Bob,
I am the Senior Director in charge of this competition and I appreciate receiving your letter of interest dated January 11. First, let me offer you my highest level of encouragement for your creation of a SIDS monitoring device. As you know, medical technology is one of the most difficult areas to make significant progress in. To make something really work and pass through all the regulatory hurdles in this space is challenging as you point out. Second, my sincere personal condolences on the loss of your child. I understand and respect your total commitment to solve this challenging problem and admire your dedication and passion to address this urgent need.
We announced the Qualcomm Tricorder X PRIZE in January 2012 and spent a year refining the guidelines and structure of the competition, which includes the winning parameters, registration fees, rules and a timeline. The guidelines were finalized and released this month after receiving input from the scientific and medical communities, companies working in this space, and the general public. We are asking teams to develop a medical device that will allow consumers to diagnose a set of 15 diseases and monitor 5 vital signs, independent of a healthcare provider or facility. While we recognize that there are a number of unique new technologies, including yours, that address important public health concerns, we could not include every one of them in this competition. We did choose a range of core and elective conditions that are widely recognized as being significant for public health in North America and also offered a wide range of sensing and interpretive challenges.
As of today there isn’t an integrated personal health device on the market that does everything that we’re requiring in the guidelines. That is the essence of the competition and sets the stage for a unified solution that can capture data from many types of sensors, potentially including yours. As with many innovation competitions and Prizes, there is a registration fee required to participate. The registration fee, which is $5,000 until April 10th, helps us cover operational expenses such as numerous team and judging summits, ongoing communications to many stakeholders, and the comprehensive device testing and judging processes that are essential to staging a fair and objective competition.
Please realize that we are a competition that intends to drive innovation and help to usher in a new digital health marketplace. We are not investors and in fact, we are not even the Judges, who will act totally independently of us in determining the finalists. Our goal is to stimulate an influx of consumer devices on the market in the near future. Even if you decided not to compete, the overall effect of these competitions will help to lift all boats in the digital health space, including yours we believe.
I encourage you to consider entering the Qualcomm Tricorder X PRIZE and/or Nokia Sensing X CHALLENGE, or join a current pre-registered team and incorporate your device into their submission. Although SIDS is not one of the defined conditions in the requirements it does not mean that the inclusion of your technology would not be advantageous to existing teams who all seek to commercialize their solutions. Inventing, developing, funding and bringing to market medical technology is a very difficult endeavor and a team approach may help. At the X PRIZE Foundation, one of our jobs is to provide a forum for sharing ideas, concepts and new technologies that might have otherwise gone unnoticed. That has been a guiding principle of this and all our competitions. We hope that you will consider becoming involved on this basis.
If you have additional questions or would like to speak further, please don’t hesitate to contact me.
Sincerely
Mark Winter
Senior Director, Qualcomm Tricorder X PRIZE and Nokia Sensing X CHALLENGE
Reprinted with permission
Photo Credit: tanewpix/Shutterstock
First in a series. I wrote a letter to Qualcomm CEO Paul Jacobs. This went out January 11th and was delivered on the morning of the 14th.
The response will be my next post.
Dear Mr. Jacobs:
As a professional blogger I’d normally be posting this letter on my web site but this time I’ll first try a more graceful approach. You see I have a beef with your Qualcomm Tricorder X-Prize and I want you to make some changes.
In 2002 my son Chase died of Sudden Infant Death Syndrome (SIDS) at age 73 days. I wrote about it at the time and received great support from the Internet community. My pledge to do something about SIDS manifested itself in a research project that came to involve online friends in Canada, Israel, Japan and Russia as well as the United States.
Our goal was to create a device that would plug into a power outlet. It would identify and wirelessly monitor all mammal life forms in a room, gathering data about whatever babies, dogs or old people were there, detecting them as they entered and left the room and setting off a loud alarm if anyone stopped breathing or their heart rate dropped below a certain threshold. SIDS can’t be cured, you see, but it can be cheated, and all that requires is judicious use of a 120 dB alarm to scare the baby back into consciousness. No parental intervention is even required.
Making the product a wall wart meant no batteries needed changing. Making it wireless meant no sensors needed to be attached or removed. It would be a no-brainer, completely plug-and-play.
And we did it.
It took four years but we completed a working prototype. We were going to call it the Tricorder and I even bought the Tricorder domain.
But then we ran out of money, and lawyers and medical experts told us there were liability issues -- that gaining FDA approval would take years and millions, though why that should be the case for a non-contact device we never understood. By this time, too, I had three young sons and a so-called career to manage, so we put the project aside. That was in 2006.
Then last week you announced a $10 million prize seeking almost exactly what we had already built. But your prize rules say I have to pay $5,000 so that 35 months from now you’ll look at the work we have already done.
How stupid is that?
We can claim your prize in 30 days, max, by porting our old code to Android or iOS (our team includes a crack tablet developer who is also an MD and specializes in medical apps). Why shouldn’t we be allowed to? This would give us the funds to finally complete our work and eradicate SIDS. How many lives won’t be saved because of these silly rules?
So please correct this error by changing your prize to allow the immediate recognition of scientific achievement. I’m sure you’d rather succeed earlier than later. The good publicity that will come from a quicker award will be no less sweet. After all, it will have pulled from obscurity technologies that might have been lost forever.
All the best,
Bob Cringely
Reprinted with permission
Second in a series. My last column looked at Apple’s immediate challenges in the iPhone business, while this one looks at the company’s mid-to-long term prospects and how best to face them. The underlying question is whether Apple has peaked as a company, but I think the more proper way to put it is how must Apple change in order to continue to grow?
Even as some analysts are downgrading Apple based on reported cancellation of component orders, saner heads have been crunching the numbers and realized that Apple still has a heck of an iPhone business. So if you are a trader I think you can be sure Apple shares will shortly recover, making this a buying opportunity for the stock.
I deliberately don’t buy any tech shares, by the way, in order not to be influenced or tempted to influence you.
New Markets Strategy
The one bit of advice Steve Jobs gave to CEO Tim Cook was to not ask himself "How would Steve do it?" That’s the Walt Disney quandary that Jobs looked to as an example of how companies shouldn’t behave following the death of a charismatic founder. But I think in the case of Apple it goes further than that, because the company is approaching the point where doing it the way Jobs would have -- if that knowledge can even be conjured up -- probably isn’t the best way for Apple anymore.
The Steve Jobs technique is to grow the company by entering new markets with brilliant solutions, creating whole new product categories. Apple’s emerging problem is that there are fewer such markets to be entered just as it is harder to create products that are obviously and compellingly superior. While this is Tim Cook’s problem it would have been Steve Jobs’s problem, too, had he survived.
When Jobs returned to Apple in late 1996 (and became interim CEO in 1997), he was faced with fixing the company’s financials and rationalizing its product line. Apple had no real product strategy in those Sculley/Spindler/Amelio days and simply made too many things. So the first couple of years were spent doing the obvious chores of cutting the company and its product lines down to size, which inevitably yielded better gross margins.
Once he had the company reorganized and ready for action Jobs attacked the market with innovative new products. The first of these was the iMac, a thoughtfully done all-in-one computer. The iMac wasn’t earthshaking but it didn’t have to be. It was elegant and simple and created a new product category, which was the whole point. Jobs’ answer to almost any challenge was to create a new product category.
Now here’s an interesting aside. I read a product review just yesterday of Apple’s newest iMacs saying that as a value proposition the 20-incher isn’t as good as Windows-based all-in-ones from some other companies. I don’t know if this is true or not, but what I find noteworthy is that it took 13 years for such a review to appear.
Maturing Categories
Let’s take this iMac issue a little further, because I think it is a great example of where Apple’s biggest challenge lies. If the 20-inch iMac isn’t notably superior to competing Windows all-in-ones, why is that? It’s because as products and underlying technologies mature it becomes harder and harder to be obviously better. In the case of these iMacs there’s a physical design, processing power, storage, and of course the operating system and applications. The new iMacs are incredibly thin, but what if they were already thin enough? They have massive processing power, but unless you are editing video who would know? The Fusion Drive (if installed) is fast, but SSDs are an option on most Windows all-in-ones. And where’s the Apple touchscreen? I don’t think anyone is arguing that Win8 all-in-ones are better than the new iMacs, but a lot of people would argue that they are better for the money.
This is the first part of Apple’s core challenge: it’s increasingly harder to be demonstrably better within a mature product class. A friend recently cited a similar example in the high-end audio market where new technology has also resulted in a similar leveling at the expense of high end suppliers. The actual difference in sound quality between a high-end preamp for $25,000 and one for $2,500 (or even $799) is much less than it used to be, so why buy the expensive stuff?
Computing technology is doing the same thing. Apple raised the bar and the competition has been forced to bring their A game. In the process there is a certain diminishment of cool as even a grade-schooler can have a smart phone or a tablet. Now what? Apple has elected to back away from the pro market and concentrate on consumers. Computing has become ubiquitous. I’m not altogether sure even Steve himself would have had a magic pill in this market.
Steve Jobs didn’t worry about this much, because he was always looking toward the next new product category, not minding the loss of market share in earlier niches. The problem with Jobs’ attitude, though, is two-fold: 1) there are only so many product categories, and 2) the bigger Apple becomes, the larger a product category has to be to be worth pursuing. This makes the population of possible new product categories even smaller.
As an example let’s consider the Apple TV, which continues to be classified by Apple as a hobby -- a tacit admission that the category simply isn’t big enough to be strategic. Yet Apple has sold seven million of the little boxes, illustrating how scale has changed at Apple.
The Problem of Scale
Scale is the second part of Apple’s core challenge. Anything the company does has to scale massively or it will be a drag on sales and earnings. We can see Apple play this out in its internal silicon development, which is the best (and highest margin) way for them to achieve hardware scale. Though Apple doesn’t sell chips in the merchant market, by making its own Systems On Chip the company is revolutionizing each of its product categories from within.
Scale also comes from building an ecosystem around products. iTunes is an example of that for music and video. Apple makes money on the content but the content also builds demand for hardware. It’s a win-win. If Apple enters the TV market as rumored, it can only happen with a massive expansion of iTunes or some new content distribution strategy to give Apple a big chunk of what would otherwise be TV network and producer revenue.
With Apple the size it is now, new product categories not only have to scale, they have to come with scaling already attached.
So where Jobs would not have worried so much about the atrophy of old product categories, choosing instead to simply invent the next big thing, at the scale where Apple presently operates there are fewer categories, and those categories can’t reach their potential without sure-fire associated content strategies. This is Tim Cook’s challenge.
The good news for Apple fans is that the company surely has one or more new product categories yet to introduce. Jobs was working on something before he died. So we’ll undoubtedly see at least one more thing. And because Apple has so darned much money and Tim Cook has to think about his legacy, too, there’s the very real possibility that Apple will simply pay to make the content scaling issues go away.
By this I mean that the exact same techniques Apple has used with such success dealing with component suppliers could be used with content suppliers, too. Apple gets the best flash RAM prices, for example, by buying in huge volume and pre-paying for products. This also makes Apple immune to component shortages. What if they did the same thing with television? $10 billion in pre-orders for shows and movies would secure Apple’s content pipeline without any anti-trust concerns and with minimal financial impact on Cupertino.
So the best days could well be ahead for Apple. But that can only happen if the company is ready to well and fully leave Steve behind.
First in a series. Has Apple peaked? Yes and no. I think the company still struggles somewhat to find its path following the death of former CEO Steve Jobs. But there’s still plenty happening and room for growth in Cupertino. So let’s start a discussion about what’s really going on there. I thought this might be possible in a single column, but looking down I see that’s impossible, so expect a second forward-looking Apple column.
The catalyst for this particular column is word coming over the weekend from the Wall $treet Journal that Apple is cutting back component orders for the iPhone 5, signaling lower sales than expected. I’m not saying this story is wrong but I don’t completely buy it for a couple reasons.
Not so Fast
It’s from a single unidentified source, which I always find suspect, and the publication makes the interpretation that the order changes are because of slow iPhone sales. How would they know?
"Wait", you protest, "there are now lots of stories in many publications with lots of experts quoted".
Not really. There are many reaction stories (called second-day stories in the newspaper game) and the experts quoted aren’t for the most part revealing anything new; they are responding to highly specific questions like "If Apple is in fact scaling back orders, what could it signify?" That’s not news.
Where people are quoted beyond this they are generally traders and traders love volatility. This extends right back to the Journal, I’d say, which also likes volatility.
Here’s what I believe. Apple has clearly opened up the iPhone 5 to anyone who will buy it. A friend bought his at Costco. Straight Talk Cellular (Walmart) now has them. Every carrier in America has or shortly will (T-Mobile) have iPhones, so that means the market is maturing. Remember the turnover in phones is twice as fast as it is in PCs, so everyone has all new stuff within three years, tops.
We have to be careful about our terms here, so while I doubt that iPhone sales have slowed I’d say that sales growth has undoubtedly slowed, which at Apple traditionally means it is time for a new product category -- or would have been during the Steve Jobs era.
Milk Run
But this isn’t the Steve Jobs era and a characteristic of the Tim Cook era, I believe, will be building out existing franchises farther than Jobs ever bothered to. This does not mean Apple won’t pioneer new categories, but that the company wants to milk the current categories for now.
A case in point is the rumored cheaper iPhone, which marketing chief Phil Schiller reportedly says isn’t happening. I’m not here to call Schiller a liar, but I’m sure he’s dissembling (isn’t that a great word?).
Schiller says "no cheap iPhone". Well that’s BS. He’ll change his mind this spring or summer saying something like "We don’t make cheap anything -- our products are packed with value -- but now we’re introducing a less expensive iPhone".
That’s exactly how Jobs would do it, right?
China Syndrome
Let’s throw in another data point: China. Last week Tim Cook was in China signing a huge iPhone deal with the dominant mobile carrier -- the largest in the world. China is both proud and price-sensitive. What better way to blow open the Chinese market than with a less expensive iPhone?
Cook was quoted as saying that China will become Apple’s largest market. He’ll want to deliver on that sooner, rather than later, and larger numbers of a cheaper model could help do that.
It wouldn’t surprise me at all, in fact, to see Apple do a short-term exclusive for the cheaper iPhone, limiting it to China -- yet another excuse for Phil Schiller to say it isn’t happening...here in the USA.
The iPhone Mini (that’s what I’d call it and Apple will too) will be evolutionary, not revolutionary. It will have a smaller form factor because Samsung has already staked its claim to oversized smartphones. It will have a smaller, cheaper display, but still a Retina Display, along with all the guts of a 4S, further integrated and made cheaper. When the Mini is eventually offered outside of China it will become the low-end phone replacing the current iPhone 4. Apple’ll keep the larger form factor 4S and the 5 until they launch the iPhone 6 at which point the 4S goes away except for the Mini.
It’s important here to understand the purpose of such an iPhone Mini, which is two-fold. The first reason is to hit the Chinese phone market with a big bang, knocking Samsung back on its heels somewhat by making chic the smaller form factor and selling 100 million or more of them in the first year. The second reason for doing an iPhone Mini is to kill feature phones altogether, expanding smart phones to the entire mobile market. This alone doubles the potential market size and will give Apple another two years of robust growth.
More about Apple’s likely trajectory (it’s not all positive) next.
Reprinted with permission
Photo Credit: Alex Wolf/Shutterstock
The U.S. government, which usually is very slow to adopt new technologies, signed an agreement to move much of the Department of Defense to Windows 8. The three-year, $617 million deal for up to two million seats is a good proxy for where American business users are headed. Or is it? Microsoft of course hopes it is, but I think that’s far from a sure thing.
This isn’t just trading Windows XP for Windows 8. The U.S. Navy, which isn’t (yet) included in this deal, only recently signed its own agreement with Microsoft to take the fleet to Windows 7. But Windows 8, being touch-enabled and running all the way from smartphones to super-clusters, is something more. It represents the U.S. government’s best guess as to how it will embrace mobile.
You can read the announcement here. Access to all Microsoft products, blah, blah, blah, but what stands out is the continual use of the term "mobile". This is new.
Windows-to-Go
Here is how this revolution plays out, according to my friends who watch this stuff all day every day. Microsoft has a long relationship with the government as a trusted vendor that will do what it takes to make its products secure. Microsoft has had FIPS 140-2 certification for RSA and AES since Windows 7. The Windows 8 you and I get with our HP or Dell PC doesn’t include the special security stack that will come with these new government PCs. So Windows, however pedestrian, is viewed by the government as good from a security standpoint.
Android, Linux, and to some extent OS X, are viewed as bad.
iOS, while just as bad as OS X in the view of internal government experts, is in a different category because of internal politics. If you combine phones and tablets iOS is the number one platform in both market share and customer satisfaction. That means if the general loves his iPhone he’ll get to keep his iPhone. Same, probably, for Blackberry. Look for the military Windows environment of the future, then, to allow the use of these devices in an otherwise Microsoft-centric world.
Traditional PCs play a relatively minor role in this emerging view of military computing with the most common device being a tablet or a smartphone. The military, which has played with Windows-based tablets for months, really likes them but hardly anyone else does.
Where a traditional desktop is required it will be with the use of Windows-to-Go (WTG), a USB keychain drive that carries a complete personalized Windows desktop. Every soldier and civilian DoD employee will have his or her personal WTG thumb drive. Desktop PCs will be effectively blank and shared by anyone who is allowed in the room.
What enables WTG is BitLocker, which is FIPS 140-2 certified. Kingston, Spyrus, and Imation all have workspace USB drives. These and Clover Trail tablets from Intel are viewed as the game changers.
Now this view of the future is very hopeful for Microsoft, but how likely is it to either succeed or become an archetype for non-military organizations?
Personal LAN
I’m not sure many non-military organizations will embrace WTG. It makes sense from both a security and an efficient hardware utilization standpoint, but other than consulting companies I don’t see private employees wandering deskless through their workdays.
I see, too, a possible source of delusion here for Microsoft. Redmond just announced that it has 20 million Win8 users, for example, but how many of those are from this DoD contract? That number could be up to two million, yet not a single Win8 copy has yet been shipped under the contract. And those rave tablet reviews coming in from the front lines -- those are compared to what?
If there’s a big picture here I think it has been missed by the military itself. That big picture says that CPUs have reached the price point where we can all afford several of them. The new soldier with his or her smartphone, tablet, and WTG key will travel with at least three microprocessors where before there was at most one.
Soon everything will have a CPU and each soldier will be a LAN of one. In this regard I think the emerging military situation mirrors private industry. But to say that Microsoft will inevitably sit in the center of all that action may just be a lot of wishful thinking on the Pentagon’s (and the company’s) part.
What do you think?
Reprinted with permission
Photo Credit: Joe Wilcox
Third in a series. Some readers of my last column in this series seem to think it was just about the movie business but it wasn’t. It was about the recorded entertainment industry, which includes movies, broadcast and cable television, video games, and derivative works. It’s just that the movie business, like the mainframe computer business, learned these lessons first and so offers fine examples.
Whether from Silicon Valley or Seattle, technology companies see video entertainment as a rich market to be absorbed. How can Hollywood resist? The tech companies have all the money. Between them Amazon, Apple, Google, Intel and Microsoft have $300 billion in cash and no debt -- enough capital to buy anything. Apple all by itself could buy the entire entertainment industry, though antitrust laws might interfere.
Right now these companies are not trying to buy the entertainment industry but to buy access to content and audiences. Their primary goal is disintermediation of cable and broadcast TV networks. The vision held by all is of Americans sitting in our homes buying a la carte videos over the Internet and eating popcorn.
This is unlikely to happen simply because cable companies and TV networks aren’t going to hand over their businesses. If such a transition does take place, and I think it is only a matter of time before it does, the catalyst won’t be phalanxes of lawyers meeting across conference tables. When the real entertainment revolution happens it will be either because of a total accident or an act of deliberate sabotage.
Sabotage is the Way
With accidents so difficult to predict or time, I vote for sabotage.
But sabotage doesn’t come naturally to the minds of big company executives, or at least not executives at the companies I’m naming here. They are hobbled by their sense of scale for one thing. Big companies like to hang with other big companies and tend to see small companies as useless. When elephants dance the grass is trampled. Well it’s time for someone to pay more attention to the grass.
While Silicon Valley has more than enough money to buy Hollywood, Hollywood is unlikely to sell. And even if they sell, it’s unlikely Silicon Valley would get anything truly useful because they’d only be buying a shell. Networks and movie studios don’t typically make anything, they just finance and distribute content.
If you can’t buy Hollywood, then you have to steal it.
What makes Hollywood unique is its continuous output of ideas. When technology companies talk about gaining access to content what they really mean is gaining access to this flow of ideas. For all we might talk about the long tail, what defines Hollywood is new content, not old, with a single hit movie or TV series worth a hundred times as much as something from the library. Intel has no trouble getting rights to old TV series, for example: it’s the new stuff that’s out of reach.
Amazon and Netflix have bought a few original productions between them but the economics aren’t especially good because they have to pay all the costs against what is so far a limited distribution outlet. These companies need to find a way to control more content for less money.
The trick to stealing Hollywood is interrupting this flow of ideas, not just for a show or two but for all shows, diverting the flow to some new place rather than where it has always gone. Divert the flow for even a couple of years and the entire entertainment industry would be changed forever.
What if there were no new shows on CBS?
Two Months from Bankruptcy
Here’s where it is useful to understand something about the finances of content production. This $100+ billion business (the U.S. Department of Labor says the U.S. entertainment industry pays $137 billion per year in salaries alone) is driven by cash yet there is very little cash retained in the business. While Apple is sitting on $100+ billion, Disney isn’t, because there’s a tradition of distributing most video revenue in the form of professional fees.
While workers in most industries think in terms of what they make per year, during the heyday of the studio system the currency in Hollywood was always how much any professional made per week. Today the entertainment industry often thinks of what someone makes per day.
The numbers are big, but not that big. George Lucas just sold his life’s work for $4 billion, which would make him a second-tier tycoon in Silicon Valley.
The only person to ever extract more cash from Hollywood than George Lucas was Steve Jobs when he sold Pixar. Ironic, eh?
So the Hollywood content creation system is fueled with cash, but the pockets from which that cash comes are not very deep. Every production company -- every production company -- is two months or less from bankruptcy all the time. They create or die.
So here comes an Intel, say, looking to buy or license content for its disruptive virtual cable system. They attempt to acquire content from the very sources they hope to disrupt. "License us your content, oh Syfy Channel, so we can use it to decrease the value of that same content sold to Time Warner Cable".
Am I the only one who sees something wrong with this picture?
Big Business is Small
Google has taken a somewhat more clever approach with YouTube financing 100+ professional video channels. But this, too, won’t have much impact on the industry since it doesn’t truly divert the content flow from its traditional destination to a new one. And at $8,000 per hour or less, YouTube budgets aren’t exciting many real players in Hollywood.
You get what you pay for.
If your goal is disruption -- and that ought to be the goal here -- then disrupt, damn it! Impede the flow of ideas. That means negotiating not with big companies but with small ones. Because the Hollywood content creation ecosystem is based on a cottage industry of tiny production companies where the real work is done. There is no mass production.
I happen to own a tiny production company, NeRDTV, which produces this rag and other stuff. I’ve laughed on this page from time to time at what my company is supposedly worth based on acquisition costs in Silicon Valley. I know my real value is much lower, having negotiated with Mark Cuban, who at one time looked to put some money in this operation.
"It’s a production company" Cuban says. "No production company is worth more than $2 million".
Yours for Just $4 billion
And he’s right. By the time you separate the production infrastructure from the content it produces -- content that is usually owned by someone else who pays for making it -- all that’s left over is about $2 million in residual payments, office furniture, editing equipment and BMW leases.
There are probably 1,000 legitimate production companies in California and 2,000 in America overall. If they are worth an average of $2 million each, buying them all would cost $4 billion.
So the cost of installing a valve on the entire content creation process for the $100+ billion U.S. entertainment industry would be $4 billion. Think of it as an option.
Or cut it a different way: $4 billion would buy a controlling share of every TV pilot and every movie in pre-production. Talent follows the money, so they’d all sell out.
This is a classic labor-management squeeze tactic from the early days of the labor movement, and it works.
Bribe the Peasants
There are no antitrust issues with buying $2 million companies or early investing in productions. They are beneath the radar at both the U.S. Department of Justice and the Federal Trade Commission. Nobody cares about small companies.
Something like this tactic is occasionally used in what’s called a roll-up, where borrowed or investor money is used to buy a basket of companies that are integrated then eventually sold or taken public. But that can’t happen here because of the sneaky antitrust requirements. Apple, if it were to try this, would have to do it through a new content division or subsidiary.
Let’s look at a real world example of what I mean.
My little sister started an Internet business selling to consumers copies of jewelry used on TV shows. Her original idea was to go to the studios and networks and cut revenue sharing deals in exchange for exclusive licenses, but the studios and networks wouldn’t even talk to her. The deals were too small, the money not enough, they claimed, to even justify the legal expense. But most importantly they didn’t want to make a mistake and set the wrong precedent. No precedent was better than a bad precedent.
Undeterred, my sister took a different approach very similar to the one I am presenting here. She found that the jewelry used in TV shows typically comes from a separate wardrobe budget and each such budget is controlled by a wardrobe mistress. If the wardrobe mistress could get jewelry for free then she wouldn’t have to buy it or rent it with that part of the budget falling to her bottom line. Unspent budget = profit. So my sister cut her deals not with the studios or networks but with the wardrobe mistresses — eventually more than 40 of them. Nearly every U.S. primetime TV show uses her jewelry with not a penny going to the networks and it was all perfectly legal.
If Seattle and Silicon Valley make a frontal attack on Hollywood they’ll fail. But if they undermine the current system by bribing the peasants, they’ll succeed for a tenth the money they’d have lost the other way.
Will they follow my advice? Probably not.
Reprinted with permission
Photo Credit: FotoYakov/Shutterstock
Second in a series. A friend of mine who is a securities lawyer in New York worked on the 1985 sale of 20th Century Fox by Marvin Davis to Rupert Murdoch. He led a group of New York attorneys to Los Angeles where they spent weeks going over contracts for many Fox films. What they found was that with few exceptions there were no contracts. There were signed letters of intent (agreements to agree) for pictures budgeted at $20-$50 million but almost no actual contracts. Effectively business was being done, movies were being made, and huge sums of money were being transferred on a handshake. That’s how Hollywood tends to do business and it doesn’t go down very well with outsiders, so they for the most part remain outside.
Jump to this week’s evolving story about Intel supposedly entering with a bang the TV set-top box business replete with previously unlicensed cable content -- an Over-The-Top virtual cable system. This was expected to be announced, I’m told, at next week’s Consumer Electronics Show in Las Vegas.
Forbes then had a very naive story about how Intel was likely to succeed where others (Apple, Microsoft, Motorola, Netflix, Roku, etc.) had already failed, with Intel’s secret sauce being lots of money (hundreds of millions certainly) to tie-up content.
Yet yesterday Intel made it known there would be no such CES announcement at all and the Wall Street Journal says the problem is content licensing.
I’ll tell you the problem. It’s 1985 all over again and just like my friend the New York lawyer for Rupert Murdoch, Intel is no doubt learning that it is difficult to buy with certainty something that the seller may or may not actually own. Studios and networks are selling, and Intel is buying, shows they may not even have the right to buy or sell.
Remember how Ted Turner bought MGM then sold the studio but kept the movies so he could play them on WTBS? Something like that.
There’s no business like show business.
Hollywood is a company town that has its own ways of doing business. The rules are just different in Hollywood. Accounting rules are different, certainly. Avatar is the highest-grossing movie in history, sure, but has it made a so-called "net profit"? Nobody knows.
Tax rules are even different for Hollywood. Personal holding companies are for the most part illegal in America, but not in Hollywood, where they have been around for 50 years and are called loan-out companies.
My point here is that when out-of-towners come to L.A. expecting to take over the entertainment business with money alone, they are generally disappointed. Sony buying Columbia Pictures wasn’t the triumph of Japanese capitalism it was presented to be -- it was a chance for the movie guys to steal from the Japanese.
When technology companies try to do business with the entertainment industry they are nearly always taken advantage of. Hollywood can’t help it. Like Jessica Rabbit, they’re just drawn that way.
Look at Intel and remember this is the company’s third such effort to get a foothold in the entertainment business, where technology companies tend to be seen as rubes ripe for plucking. Apple and Microsoft are right now trying to do exactly the same thing as Intel and they aren’t succeeding, either. Nor will any of them succeed unless they take a more enlightened approach.
My next column will spell out exactly how this could be done (previous column).
Photo Credit: PiXXart/Shutterstock
I wrote here nearly a year ago that there would be no more annual lists of predictions and I’m sticking to that, but I want to take the time for a series of columns on what I think will be an important trend in 2013 -- the battle for Hollywood and home entertainment.
The players here, with some of them coming and some of them going, are Amazon and Apple and Cisco and Google and Intel and Microsoft and maybe a few more. The battleground comes down to platforms and content and will, by 2015 at the latest, determine where home entertainment is headed in America and the world for the rest of the century. The winners and losers are not at all clear to me yet, though I have a strong sense of what the battle will be like.
Why fight for Hollywood? Because making our spreadsheets recalculate faster is no longer enough to inspire new generations of computer hardware. Because Silicon Valley has come to appreciate continuing income streams from subscription services. Because there are legacy players in the TV industry who are easily seen as vulnerable.
Notice I didn’t include Facebook in my list of combatants. Facebook will need the next two years to consolidate its existing businesses before it can even begin to think about Hollywood. Facebook will miss this cycle completely.
Another company I didn’t mention is Netflix. Though this pioneer of video streaming has been around since the 1990s it feels to me more like an acquisition candidate in this battle than a conqueror. Same for TiVo and even Roku: too small.
Still, it’s in Netflix-style Over-the-Top (OTT) streaming content where we’ll see lots of action that will eventually come at some expense to the incumbent cable companies. Some of these will choose sides, like Comcast is apparently doing with Intel, while others may be acquired or just fade away.
Look at both Motorola Mobility (Google) and Cisco trying to get out of their cable box businesses. This does not bode well for their customers, the cable systems.
Content comes down to TV, sports, and movies, with the big attraction of 2012 being sports because of its resistance to piracy. Sports means large live audiences that are unwilling to wait for a torrent to deliver the Big Game two days later. CNN always does well with advertisers when there is a war or a disaster, but sports figuratively is a pre-scheduled war or disaster complete with cheerleaders and good lighting, which is why ESPN is worth more than CNN, MSNBC, and FoxNews put together.
Video games have peaked as a business. It was a great ride but the days of the $60 video game title are limited as mobile, casual and social gaming take over. This has Microsoft, for one, scrambling hard to make its Xbox game console into something like a TV network. Nintendo and Sony are not significant players in this space even though Sony thinks it is. They, too, have peaked, which is surprising given Sony owns a major movie studio.
The dominant video platform or platforms will be determined by the content they carry, so we are going to be seeing lots of money going to Hollywood from Seattle and Silicon Valley, enriching networks and studios alike. Alas, I doubt that this effort, which is well underway, will show any clear winners simply because the major tech companies are going about it so stupidly.
I’ll explain over the next day the right way for technology companies to conquer Hollywood.
Photo Credit: Andrea Danti/Shutterstock
As the father of a precocious first grader I can relate somewhat to the children and parents of Newtown. My son Fallon goes to a school with no interior hallways, all exterior doorways, and literally no way to deny access to anyone with a weapon. Making this beautiful school defensible would logically begin with tearing it down. But the school design is more a nod to good weather than it is to bad defensive planning. The best such planning begins not with designing schools as fortresses or filling them with police. It doesn’t start with banning assault weapons, either, though I’m not opposed to that. The best defensive planning starts with identifying people in the community who are a threat to society and to themselves and getting them treatment. And our failure to do this I generally lay at the feet of Ronald Reagan.
I’ve written about Reagan here before. When he died in 2004 I wrote about a mildly dirty joke he told me once over dinner. It showed Reagan as everyman and explained to some extent his popularity. Also in 2004 I wrote a column that shocked many readers as it explained how Reagan’s Department of Justice built, brick by brick, a federal corrections system it knew would do nothing but hurt America, as it has ever since, making both crime and poverty worse, all in the name of punishment.
At the same time Reagan was throwing ever more people into prison he was throwing people out of mental institutions -- a habit he adopted as California governor in the late 1960s. When he came into office President Reagan inherited the Mental Health Systems Act of 1980, a law that passed with huge bipartisan support and was intended to improve the quality of community mental health care. Reagan immediately killed the law by refusing to fund it, thwarting the intentions of Congress.
Reagan was offended by the entire idea of public health policy: remember Just Say No?
The Reagan administration cut funding for mental health treatment and research throughout the 1980s and it has never recovered. The Administration changed Social Security policy to disenfranchise citizens who were disabled because of mental illness, making hundreds of thousands homeless. What they called the New Federalism resulted in mental health treatment moving from the public to the private sector and becoming mainly voluntary: mentally ill people had to want to get better and then generally had to pay for their own treatment. No wonder it didn’t work.
Jump to Newtown just over a week ago, where 20-year-old Adam Lanza managed to slip virtually unnoticed through the mental health system. Anyone who knew him knew he was troubled, but his family had enough money to keep him out of the system. It was assumed the family would care for him, keeping him out of trouble, too, but they didn’t.
Today, thanks to the Internet and laws supporting victims’ rights, I can find where convicted sex offenders live in my neighborhood, but I can’t find my local Adam Lanza. And maybe that’s okay and my Adam deserves some privacy. But not only can’t I find him, neither can the local police, local medical officials, or even the FBI. We don’t keep track of these likely threats to our communities when it would be so easy to do so. It doesn’t even require Big Data, just plain old little data that’s been sitting all along with educators, health care professionals, gun sellers and pharmacists.
That’s what we should do in response to Newtown but instead we’ll now have a big argument about banning guns or putting police in schools. Probably very little will be done to simply identify and treat the hostiles within our society.
We didn’t do it when the wacko was named John Hinckley Jr. and the victim was Ronnie Reagan, himself, and we probably won’t do it now.
Reprinted with permission
By now most of us have read or heard that Instagram (now part of Facebook) proposes a change to its terms of service to allow the company to use your pictures and mine in any fashion it chooses, including selling the pics to third parties. So if you don’t want your baby pictures to risk being used in a beer ad, we’re told, you should close your Instagram account by January 15th. One pundit called this move Instagram committing suicide, but I think something else is going on.
Can’t you just see the meeting at Facebook in which this idea was first presented? "It’s a whole new revenue stream!" some staffer no doubt howled. "If our users are oblivious or stupid enough to let us get away with it, that is. Maybe we can sneak it through over Christmas". We’ll see shortly, won’t we?
But now, Instagram says it was all a huge mistake, that users own their pictures and there’s no way Facebook is going to sell them to anyone -- but the company hasn’t yet revealed alternate legal language, which they should have been able to cobble up in an hour or two. The underlying problem of mean-spirited, self-serving, over-reaching terms of service is still with us at Instagram and almost everywhere else.
The revised terms of service were stupid and couldn’t stand. Let’s hope in their next attempt to grab rights (because that’s what this whole thing was about and probably still is) Instagram and Facebook treat their users fairly.
This situation is like the story my sister tells of when she worked in the legal department at Esmark (Swift Premium hams and other food brands), where they once tried to trademark the term soup. Lawyers with time on their hands will come up with the craziest things to burn billable hours.
Then there’s the two steps back, one step forward theory, which suggests there are actually two land mines contained in the new terms of service and Facebook’s plan is to cave on one while trying to scoot the other one through unnoticed. Click on the ToS link above and see if you can find the second bullet.
Finally there’s the exit strategy theory, in which Instagram founders, now headed out the door with their $1 billion, have left a booby trap behind to destroy their creation so it won’t be an obstacle to their next very Instagram-like startup.
My money’s on soup.
Reprinted with permission
Photo Credit: HomeArt/Shutterstock
Most of us have had mentors, and when it came to becoming a writer three of mine were the late Bill Rivers at Stanford, who taught me to think and not just report; legendary book editor Bob Loomis at Random House, who felt I might be able to stack enough of those thoughts together to fill a book; and a guy most of you know as Adam Smith, who let me copy his style.
Smith, named after the Scottish economist and writer, helped start both New York and Institutional Investor magazines while at the same time punching out books like The Money Game and Paper Money -- huge best sellers that taught regular people how the financial system really worked. That gig explaining the inner workings was what appealed to me. So 30 years ago, having been recently fired for the second time by Steve Jobs, I went to New York and asked permission of Smith to imitate him, though applying his style to technology, not finance. Many such impersonators exist, of course, but I was apparently the first (and last) to ask permission.
And so we became friends, Smith and I, for half of my lifetime and more than a third of his (he, like many of my friends, is now over 80). Smith holds court these days on 5th Avenue at a hedge fund called Craig Drill Capital -- a pocket of integrity and thoughtfulness you’d think could not exist in a world so devoted to high frequency trading. When I’m in New York I sometimes visit Smith and he introduces me to his friends, one of those being economist Dr. Al Wojnilower.
Dr. Al started working at the New York Fed two years before I was born and spent 22 years as chief economist at Credit Suisse First Boston. Sixty years on he’s still explaining where the financial world is going and why, somehow doing so without a supercomputer in sight. Dr. Al is the best of the best when it comes to understanding those inner workings, in this case of our economy and the world’s. And that’s why he’s the author of the only guest post I’ll ever print in this rag. It’s about the so-called fiscal cliff we’re so worried about. His explanation is simple, untainted, and worth reading, and he’s allowing me to reprint it here. I’ll come back at the end with a comment.
One more thing: the reason I am writing about this rather than whether the iPad Mini will soon get a retina display is because this is much more important to us as a nation and a world. I’ll get back to technology tomorrow.
FALLING OFF THE FISCAL CLIFF
Dr. Albert M. Wojnilower
December 14, 2012
Although most observers have long understood that fiscal policy is tightening, many may have underestimated the severity of the tightening scheduled for 2013, and the difficulty of reconciling the conflicts of ideology and personal ambition that separate the parties who would have to agree to any mitigating "deal". While the contestants joust, the economy is already falling off the fiscal cliff.
Most of the public remains blissfully unaware of the scale of the problem. They will be aghast when, soon after year-end, they experience reduced take-home pay and higher tax bills, as well as unforeseen job losses at the many entities, both public and private, that depend, both directly and indirectly, on federal funding and contracts.
Monetary policy has been hard at work to produce sharply lower long-term interest rates, an easing of credit availability, and a recovery in home and stock prices. In recent months (some three years after the end of the Great Recession), households have finally responded by stepping up sharply their purchases of homes, autos, and other durable goods. The increased spending has reduced the rate of personal saving to near its pre-recession lows. Saving will narrow further as unanticipated tax increases bite into incomes until households are forced to curtail their outlays once again, bringing on a new recession.
The improvement in business due to the increased buoyancy of the household sector has been partly offset by reductions in military and state and local government outlays. But more ominous is the fact that business capital investment, which had earlier sustained the economy by rising at double-digit rates, is now actually shrinking -- notwithstanding record profit margins and the low credit spreads brought about by Federal Reserve policy. The decline reflects mounting fear that the impending setbacks to household incomes will halt or reverse the upward momentum in consumer spending, which is the chief source of business revenues. The indifference of elected officials to such a disastrous reversal is yet another reason for the widespread loss of confidence in governmental competence.
Growth in credit and debt is a vital attribute of modern economic systems. The funds that I spend to buy your products and services must be re-spent by you on other goods and services. To the extent you put the money in the mattress (that is, fail to recirculate your revenues), the flow of incomes is reduced. Even if you use the funds to buy financial assets rather than goods and services, total incomes will be lowered until the ultimate recipients spend the money. And if the funds are used to pay off debt owed to financial institutions, the income flow is reduced until someone else borrows in order to spend.
The current prospect is that government borrowing will be reduced more rapidly in coming years, as additional revenues are raised from households in ways that compel consumers to curtail their borrowing and spending. This in turn is liable to provoke businesses also to tighten their belts. The result will be that total credit, debt, and incomes grow more slowly, or even shrink. So will GDP. If the shrinkage in incomes and GDP is substantial, the federal budget deficits that we are trying to reduce may actually get larger.
Perhaps, as has happened in recent years, last-minute political agreements will be adopted that, although advertised as cutting spending and raising revenues, actually accomplish the opposite. The damage from Hurricane Sandy may move the fiscal debate in that direction. Natural disasters tend to strengthen business in the short run, because they impel large public and private expenditures for repairing the damage. That sort of budget compromise, if it were of a longer-term nature, would be the best possible outcome of the current fiscal cliff negotiations. It would offer hope that, after a modest fiscal shock had been absorbed, GDP growth might reach 2½ percent later in 2013 (compared to less than 2 percent this year) and continue to accelerate.
Unfortunately, no such token agreement is likely, since President Obama is himself an austerity advocate; he just wants to distribute the austerity in different ways than the Republican opposition. An agreement to raise revenues and lower spending along "Simpson-Bowles" lines, the most probable sort of compromise (if there is a compromise), likely would lead to years of GDP growth limited to about 1½ percent or less, as domestic business investment continues to languish. Business invests when the economy is expected to grow, not when it is condemned to austerity.
Conceivably, no compromise will be reached at all, mainly because of deadlock on the issue of extending the debt ceiling. Unless the ceiling is abolished, or raised in unassailable fashion for a number of years, any agreement will be worthless. The Administration could not accept an agreement that had to be renegotiated every few months to avoid a government shutdown. And without an agreement, the economy would topple over the fiscal cliff into a recession that has no visible means of exit. Business capital spending geared to the future would collapse.
Awareness of these dangers is bound to have a major effect on the Federal Reserve’s policy decisions. Lower interest rates and easier credit have played a key role in strengthening business investment, raising real estate and stock prices, and promoting the recovery in housing starts and consumer buying. As long as stringent fiscal restraint persists, so will monetary ease. Although the current technique of "quantitative ease", i.e. the large-scale buying of Treasury and mortgage-backed securities in order to lower longer-term rates of interest, may eventually be subject to diminishing returns, it seems to be working well for now. The Federal Reserve has also announced specific thresholds of 6½ percent for unemployment and 2½ percent for inflation to underline its commitment to continue aggressive ease until economic growth is satisfactory.
This suggests that, absent a benign fiscal agreement, high quality bond yields may well decline even further. Meanwhile, the cost-of-living index will be sustained by the increasing prices of utilities, transit, education, and medical care, as governmental supports are reduced.
As befits the season, we cross our fingers and depend on "good will towards men" for continued prosperity.
Okay I’m back. As I read it Dr. Al puts a pox on both their houses. Neither Republicans nor Democrats have viable public positions for growing the economy out of its current mess, and growing out of messes is pretty much the only method we’ve ever had as a nation. So one or both sides don’t actually mean what they say OR they are simply stupid. Could be both. And "good will towards men" means a grudging compromise of sorts where both sides complain about what they’ve had to give up while they achieve what they were actually aiming for all along.
We’ll soon see…
Reprinted with permission
Just weeks after I wrote a column saying Apple will dump Intel and make Macintosh computers with its own ARM-based processors, along comes a Wall Street analyst saying no, Intel will take over from Samsung making the Apple-designed iPhone and iPod chips and Apple will even switch to x86 silicon for future iPads. Well, who is correct?
Maybe both, maybe neither, but here’s what I think is happening.
Apple is dependent on Samsung for making most of its Cupertino-designed chips, yet Apple has grown to hate Samsung over time, seeing the South Korean company as an intellectual property thief. So Apple wants out of the relationship, this much is clear to everyone.
There is only so much semiconductor fab space in the world, and Apple is about the biggest customer of all so the company can’t go just anywhere and hope to score the 200 million chips per year they need.
Yes, 200 million.
The biggest fabs besides Samsung are TSMC in Taiwan and Intel. There are lots of news stories about Apple talking to TSMC at a time when that company is also massively -- and oh, so conveniently -- expanding its production capacity. TSMC has the inside track for Apple’s business, we have all been led to believe.
So what’s up with Intel?
Well Intel has excess fab capacity, too, and Intel would very much like to keep Apple as a customer. The manufacturer can fab Apple’s A5 and A6 chips, sure, especially if it keeps Apple buying i5’s and i7’s for the Mac.
Apple could go with TSMC, could go with Intel, or, heck, could go with both, but the fact that the iPad’s maker is talking with two companies and probably playing them against each other to get a better deal is kind of a no-brainer, don’t you think?
But what about the part where the iPad goes x86 on us?
I suspect that’s just an Intel fantasy.
"We’ll show you the iPad on x86 will be so much better, let us show you, let us pay for porting the software", I’m guessing Intel said. After all, that’s exactly what Intel did years before when Apple first considered dumping PowerPC.
It’s exactly the sort of thing that Intel would offer and exactly the sort of offer that Apple would accept because -- what the heck -- it costs nothing and might score another percent or two from TSMC.
But this doesn’t at all mean Apple will dump its own chips for Intel’s in the iPad or any other product line. That would take a design miracle on Intel’s part and it’s been awhile since Santa Clara pulled off one of those.
Apple’s still intent on designing its own future chips, though if the price is right I’m sure it will let Intel build them.
The deeper dynamic here is what I find really interesting, though. Intel is rudderless and looking for purpose. Apple is cranky and domineering. What this probably means is Intel will go from being Microsoft’s bitch to being Apple’s.
Reprinted with permission
Corporations, especially big American ones, file lawsuits all the time for many reasons. Often they sue to force others to comply with agreements or to punish non-compliance with the law. But sometimes they sue, well, just because they can. I suspect that is what’s happening in Hewlett Packard’s current fight over Autonomy, the UK software company HP bought two years ago for $11.1 billion. The HP board seems determined to demonize Autonomy founder Mike Lynch for being smarter than they are.
Given the smarts that HP board has shown in recent years, we may all be at risk of being sued by the company.
HP, business faltering with no mobile strategy to speak of and stock price dropping, has looked like stupid-on-a-stick for years now. A succession of bad CEO hires (starting I believe all the way back with Lew Platt) and bad acquisitions compounded by juvenile boardroom behavior (remember the illegal phone taps?) has rightly cost the company in both reputation and market cap.
Yes, HP overpaid for Autonomy. Anyone who looked at the deal at the time could see that -- anyone who wasn’t working at HP, that is. Oracle’s Larry Ellison certainly said it at the time -- the price was insane. Now HP wants to call it fraud when what actually happened was probably more along the lines of what they call in the UK fast business.
Mike Lynch didn’t extract that $11.1 billion from HP at gunpoint; the company asked to pay it.
And now, to avoid embarrassment it seems, HP revises history. Top brass was duped, their Wall Street advisers on the deal (15 firms!) were duped, too -- tens of millions in fees paid apparently for nothing. The numbers now make no sense so the books must have been cooked and heads will roll as a result, claims HP, explaining the $8.8 billion impairment charge they are taking this quarter, effectively saying the company threw away that much money.
Except we’re likely to find months or years from now that the books weren’t cooked at all. Larry Ellison saw through them. HP just paid too much. And though the deal was made by the hapless Leo Apotheker, it closed under current CEO Meg Whitman, who could have paid a breakup fee and walked away but didn’t. So Meg, who has a hefty ego investment in being seen as the company savior, had to have been deliberately duped, goes the new reality, hence the lawsuit.
Remember this was HP’s second $8+ billion impairment charge in a row, following by a quarter the write-off of most of HP’s huge investment in EDS, another bad purchase I’ll cover at some length another time.
Fool me once, shame on you; fool me twice shame on me.
Shame on you, Meg.
Reprinted with permission
Photo Credit: drserg/Shutterstock
My son Fallon, who is six and still hasn’t lost any teeth, has a beef with Apple, iTunes, and the iOS App Store. "Apple is greedy", Fallon says. But he has come up with a way for the company to improve its manners through a revised business model.
Fallon would like to buy more apps for his iPod touch, but the good ones cost money (what Fallon calls computer money) and he has been burned in the past by apps that weren’t really as good as the reviews suggested, probably because the reviewers weren’t six.
"If I buy an app and I don’t like it, I want Apple to give me my money back", Fallon says. "Or maybe they can keep a little of it. Here’s my idea. If I buy an app and delete it in the first hour I get all my computer money back. If I delete it after a day Apple can keep 10 pennies from every dollar. If I delete it after two days Apple can keep 20 pennies. If I keep the app for 10 days or more I can’t get any money back".
"So it’s like renting to own?" I ask.
"Maybe. I’m not sure. Don’t ask me these things, Daddy, I’m just a kid".
This Apple doesn’t fall very far from the tree.
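Just for fun, here is what Fallon’s sliding refund might look like as code. This is my own minimal sketch in Python, not anything Apple actually offers, and the treatment of the gap between the first hour and the first full day is my interpretation, since Fallon didn’t cover it:

# My sketch of Fallon's hypothetical sliding refund -- not an Apple API.
def fallon_refund(price_dollars, hours_held):
    """Return the refund owed under Fallon's proposed schedule."""
    if hours_held <= 1:                  # deleted within the first hour: full refund
        return round(price_dollars, 2)
    full_days = int(hours_held / 24)     # whole days the app was kept
    if full_days >= 10:                  # kept 10 days or more: no refund
        return 0.0
    apple_keeps = 0.10 * full_days       # Apple keeps 10 pennies per dollar per day
    return round(price_dollars * (1 - apple_keeps), 2)

# A $5 app deleted after three days would return $3.50 in computer money.
print(fallon_refund(5.00, 72))           # prints 3.5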
Reprinted with permission
Photo Credit: Liusa/Shutterstock
Two days ago, Paul Otellini resigned his position as CEO of Intel. Analysts and pundits have weighed in on the matter, generally attributing Otellini’s failure to Intel’s late and flawed effort to gain traction in the mobile processor space. While I tend to agree with this assessment, it doesn’t go far enough to explain Otellini’s fall, which is not only his fault but also the fault of Intel’s board of directors. Yes, Otellini was forced out by the board, but the better action would have been for the board to have fired itself, too.
If there was a single event that triggered this end to Otellini’s tenure at Intel I’m guessing it is Apple’s decision to abandon Intel chips for its desktop computers. There has been no such announcement but Apple has sent signals to the market and the company doesn’t send signals for fun. The question isn’t if Apple will drop Intel but when, and the way product design changes are made, the when is not this Christmas but next.
I’m sure that Intel just lost its second or third-largest customer, a company important not just for its size but for its position as a design leader in the desktop space. This alone would have doomed Otellini.
Bitter Departure
But here’s the thing to notice: Otellini resigned this week not just from his position as CEO but also from the Intel board. This is no retirement and it is more than just a firing, too -- this is something ugly. Normally Otellini would have remained on the board for a year or two and that he isn’t suggests that his relationship with the board is totally poisoned.
Whose fault is that? It is the fault of both sides.
The simplistic view most people have of boards of directors is that the CEO runs the company and the board hires and fires the CEO, simple as that. If that were the heart of it, though, there would be no need for committees and sub-committees, and board meetings could be done on the phone with an up or down vote four times per year. Modern boards share power to some extent with the CEO, they help set company policies, and they are responsible for setting off alarms from time to time.
In the case of Intel, the alarms have been at best muted and it is pretty easy to argue that the board simply didn’t do its job any better than did Otellini.
Intel under Otellini has been a model of Bush era corporate responsibility, which is to say manically cutting costs while doubling down on its desktop processor business and giving little to no thought to market shifts like the current one to mobile that could screw the whole business. Intel spent a decade fixated solely on fighting AMD, a battle it won a long time ago yet still keeps fighting. That was Otellini’s fault but it was also the Intel board’s fault.
Wrong Fight
The company was too busy fighting AMD to notice the rise of mobile. And while the pundits are correctly saying ARM-this and ARM-that in their analysis of the Intel mobile debacle, the source of the successor technology is less important than the fact that the two largest high-end mobile manufacturers of all -- Apple and Samsung -- are making their own processors. They will never be Intel customers again.
It’s like Mitt Romney talking about the 47 percent: no matter what Intel does -- no matter what -- the company will be a minority player at best in the mobile space. This explains, too, the $50 billion in market cap that has been effectively transferred from Intel to Qualcomm over the last five years.
Intel had a chance to buy Qualcomm twice, by the way. Discussions happened. But twice Intel walked away and one of those times the guy leading the departure was Paul Otellini. Intel could have owned Broadcom, too, with the same story, but they (board and all) were too busy being fat, dumb, and happy fixating on AMD.
Here’s what should happen at Intel now. Company vision has failed from top to bottom. It’s time for new leadership including a new board, because the existing board hasn’t shown itself competent to replace Otellini -- who by the way should be replaced, since he’s as clueless as the board that’s firing him.
Intel needs new leadership and a bet-the-company move toward dominating some new technology, I’m not sure which one, but there are several from which to choose.
Given that the Intel board isn’t firing itself, I expect they’ll hire another bad executive to replace Otellini and Intel’s fall will continue. It’s still a rich and profitable company and can go a decade or more with a cargo cult corporate culture based on hope that desktops will return.
Reprinted with permission
Photo Credit: Viorel Sima/Shutterstock
A couple weeks from now we’re going to start serializing my 1992 book Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can’t Get a Date. It’s the book that was the basis for my 1996 documentary TV series Triumph of the Nerds and ultimately led to this column starting on pbs.org in 1997.
What goes around comes around.
We’ll be serializing the complete 1996 paperback edition, which is 102,000 words in length, pumping the book onto the intertubes at around 2,000 words per day. In about 51 days, give or take a bit, we’ll put the entire work on the web with no ads and no subscription fee, just lots and lots of words.
Collective Thought
Our ultimate goal in doing this is to prepare yet another revised edition for 2013 but to do it in a completely new way -- with your help. We’re going to publish the book online as a blog and ask you to comment on it. Tell us what’s funny, what’s moving, what’s simply wrong, and tell us how you know that. If you were there at the time, say so. If you remember it differently than I did, say that, too. We’ll gather all those comments, I hope thousands of them, and my book buddy Parampreet Singh and I will carve the best of them into a new annotated version of the book that will not only expand the past but also extend into the future.
Those who want their submissions credited will get their wish. Those who want to remain anonymous can do that, too.
A few weeks after the serialization is over we’ll publish a hybrid ebook that lets you toggle back and forth between the 1996 and 2013 versions, with the 2013 version being probably twice as long or more -- at least 200,000 words.
Of course I do this so my wife can buy shoes, but it’s more than that. Accidental Empires was a seminal book that inspired a lot of people to become involved in technology and even to start their own businesses. You’d be amazed at the number of successful companies that were inspired by that book -- a book that is lost to the current generation of startup founders. If we can bring back the best parts and make them even better and more relevant to today we can inspire hundreds more such companies -- and buy shoes.
We’ll get this going as quickly as we can but right now I’d like to throw an idea out for your consideration. There’s a chapter in the book (I’ve been re-reading it) about the seven (plus or minus two) numerals that we can all keep in our short-term memories at any moment. I presented this as a figure of merit for nerds since the best programmers in those days were the ones who seemed capable of keeping their code -- all of it -- in their heads at one time. If this was a good proxy for programming ability, I suggested, then we ought to have a contest to find the best short-term memories in America and see if those people could become great programmers.
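To make the idea concrete, here is a minimal digit-span drill of the sort that chapter describes, sketched by me in Python. It is an illustration only, not anything from the book or from any real contest:

# A minimal digit-span drill (my sketch): deal progressively longer strings
# of random numerals and record the longest one the player can echo back
# from short-term memory.
import random
import time

def digit_span_test(start=4, limit=12, view_seconds=3):
    best = 0
    for length in range(start, limit + 1):
        digits = "".join(random.choice("0123456789") for _ in range(length))
        print("Memorize:", digits)
        time.sleep(view_seconds)
        print("\n" * 40)                 # crude way to scroll the digits off screen
        answer = input("Type the digits back: ").strip()
        if answer != digits:
            break
        best = length
    print("Your digit span:", best, "(seven, plus or minus two, is typical)")
    return best

if __name__ == "__main__":
    digit_span_test()

The "seven, plus or minus two" figure is George Miller’s classic short-term memory result, which is what that chapter riffs on.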
Pure Genius
Lately I’ve been thinking of something very similar, though tailored a bit for the current era.
America is dropping behind in technology because our education system is fading, we’re told. Now this doesn’t happen to be true at all as I showed recently with six different studies (1, 2, 3, 4, 5, 6) funded mainly by the US Department of Defense. But what is true, I think, is that the rest of the world is catching up. America’s inherent technical advantage isn’t what it used to be. This is point #1.
Point #2 is that India and China, especially, are emerging as technology powerhouse nations primarily on the basis of their immense populations. If technical aptitude is equally distributed and nurtured, then countries with three times the population of the United States will nurture three times as many geniuses. The only way to compete with that is to: 1) get those foreign geniuses to move to America, or; 2) come up with a more efficient way of recruiting the best and the brightest American students to high tech -- to increase our own genius yield.
Point #3 has everything to do with the role of geniuses in building new industries: they are absolutely vital. I made this point very strongly in Accidental Empires, that the function of the genius is to make possible advances that would be otherwise impossible. What this means in the technical and ultimately economic competition among nations is that a few very smart people can make the difference. We are mistaken to some extent, then, when we worry about average test scores and average performance. Sure these things are important, but they aren’t the key to future industries and breakthroughs, since those will be made pretty much entirely by a very small number of quite non-average people.
Geek Idol
Finding and nurturing those non-average folks, then, is not only a function vital to continued American success as a world power, it is also a heck of a lot easier to do than jacking up everyone’s SAT scores by 50 points.
Which brings me back to the idea of a test or a contest that I’ve been calling Geek Idol.
America and the world are mad for talent competitions so I think we should have one for finding the best people to become computer scientists and engineers. Let’s start a discussion right here of what such a competition would look like, what it would measure, and how it would work. Remember this will only scale properly if we also make it entertaining. It has to be fun or it won’t happen. And I’m quite determined that this should get a chance to happen.
Once the idea is fleshed out a bit, I predict that some person or organization with money to spare will emerge to fund it.
If we envision it they will come.
Reprinted with permission
Photo Credits: NinaMalyna/Shutterstock
Today is a big day for Microsoft, with the Windows 8 and tablet launches, and potentially a very big day, too, for Microsoft CEO Steve Ballmer. It had better be, because some pundits think Win8 is Ballmer’s last hurrah, that he’ll be forced to step down if the new operating system isn’t a big success. That might be true, though I have a hard time imagining who would replace Ballmer at this point and how the company would change as a result. I’m not saying there isn’t room for improvement -- heck, I’m among those who have called for Ballmer to go -- I’m just not sure what would be any better. More on that in a future column.
Today, rather than look to the future or even to Windows 8, I’d like to write more about Ballmer, putting his reign at Microsoft into some context.
Ballmer became Microsoft’s CEO in 2000, taking over for Bill Gates, who had run the company for the previous decade. It’s hard to imagine that Ballmer has been running Microsoft longer than Gates did but it’s true.
Back in 2001, in addition to writing this column, I was a columnist for Worth magazine -- a monthly business book that lives on today in name only. The magazine failed years ago but the title was eventually sold to The Robb Report, which covers conspicuous consumption and luxury goods. So while there is a Worth published today it isn’t the one I wrote for and, interestingly, they don’t own the back issues (I checked).
In 2001, I was asked to write for Worth its cover story on the CEO of the year. Ballmer shared the award that year with -- get ready for a shock -- Enron CEO Jeffrey Skilling. I wasn’t asked to write about Skilling, just Ballmer. Of course Skilling was out of Enron just three months later as that house of cards began to fall. Five years after that, Skilling was in a minimum security federal prison in Littleton, Colorado where he remains today, not due to be released until 2028.
Here’s the story I wrote about Ballmer back in 2001, which I’m fairly sure is unavailable online anywhere except from me. Remember this was 11 years ago. The Ballmer of today isn’t dramatically different from the one I describe. He’s still willing to bet big and expects to win. Note that .NET and Xbox, both mentioned in this piece as important new products for Microsoft, have been just that. The big question for 2013, of course, is whether Ballmer can pull off something similar again.
"Use the picture where I look friendly!" Microsoft CEO Steve Ballmer boomed to Worth design director Deanna Lowe during the photo session for this month’s cover. "When you work at Microsoft, you always try for friendly". That could be the theme for Ballmer’s reign as head of the world’s largest software company, a role he assumed a year ago from Bill Gates, who often acted as though friendliness was not in his job description. And that’s the whole point, because it is Ballmer’s job to remake Microsoft for the post-personal computer, post-Department of Justice, post-Gates reality of the new century. But don’t be fooled by Ballmer’s legendary exuberance, because he’s a shark, too, just a shark of a different color.
Ballmer’s challenge, and what makes him a good CEO, is different from that of most other chief executives. With market share big enough to attract the attention of government antitrust lawyers, with $10 billion in annual profits, and $27 billion in available cash, Ballmer’s job is to keep that money machine oiled and running smoothly. This is harder than one might expect. It’s not just the economic downturn or even the maturing personal computer market that presents the greatest challenge. It’s what Ballmer calls the "large number problem" -- simply that it is hard to keep sales and earnings growing at 20 percent a year when a company gets to be the size of Microsoft. Yet it was exactly that kind of sustained growth that made Microsoft stock the darling of the '90s and made Ballmer, himself, a billionaire. The challenge is finding new ways to repeat old results.
With year-on-year PC sales dropping, Microsoft can’t count on growth to be driven by traditional customers like Compaq, Gateway and Dell. Nor are new Internet businesses the growth center that Microsoft, and almost everyone else, thought they would be. Ballmer is quietly moving out of those operations -- Expedia, Citysearch and others -- closing them, selling them outright or taking partners to share risk. Even Microsoft network properties like MSN and MSNBC have to function under a new reality that means no more $400 rebates for committing to two years of MSN service. Under Ballmer, this richest of all US companies is doing everything it can to save money.
Yet he’s raising salaries. Part of the new reality is that Microsoft can’t continue to count on stock options to keep its employees happy. So Ballmer is giving raises throughout the middle ranks with the goal of paying Microsoft’s 42,000 employees better than they would be paid at 60 percent of Microsoft’s competitors.
And Ballmer is investing heavily in two new Microsoft businesses. The first, called .NET (pronounced “dot-net”) is a tech-heavy bet that people and businesses can be convinced to essentially rent their software over the Internet if it is more powerful and easier to use. But .NET means more to Microsoft than just rent payments, since it quite intentionally requires a very Windows-centric server infrastructure that could lead to gains for Microsoft against traditional Big Iron companies like IBM. If .NET works, Microsoft will not only have more deterministic revenue, it will finally have made it past Fortune 500 desktops and into those companies’ even more lucrative computer rooms.
The second new business for Microsoft is Xbox, the company’s first-ever video game system, to be introduced later this year, going head-to-head with Sony’s PlayStation 2. Xbox, which also plays DVDs and sure looks like a PC on the inside, is Ballmer’s bet just in case the home computer as we know it today goes out of fashion. It doesn’t hurt, either, that video games are a $16 billion market that’s brand new for Microsoft and not likely to cause a stir at the Department of Justice.
Ah, the Department of Justice. One reason Ballmer probably has the top job at Microsoft is because of the DOJ. No matter what the final outcome of the antitrust case may be, Microsoft’s youth is gone, as is much of the cachet of Bill Gates, who came across in his video deposition as arrogant and evasive. In the last year dozens of Microsoft executives have decided the software business isn’t so much fun anymore and moved on. One could argue that list even includes Gates, who continues as Microsoft’s Chairman and chief software architect, but Ballmer runs the company.
This rise of Steve Ballmer says as much about Gates as anything. Ballmer’s first office at Microsoft wasn’t even a desk, it was the end of a sofa in Bill’s office. He has always been subservient to Bill, and that subservience has been an important aspect of Ballmer’s success at Microsoft. When they were students at Harvard, Ballmer and Gates competed for the Putnam national mathematics prize and while neither won, Ballmer scored higher than Gates, a fact that neither man chooses to mention.
But Ballmer is much more than just a straight man to Gates. He finished Harvard (Gates didn’t) and went on for a Stanford MBA. Except for a short stint in product management at Procter & Gamble, Ballmer has spent his entire working life at Microsoft. Over the years he has run nearly every Microsoft division and for a decade managed what was the company’s all-important relationship with IBM. So Ballmer knows what big companies are like just as he knows Microsoft inside and out.
And Ballmer has guts, once taking an individual business risk that makes anything accomplished by Gates pale by comparison. In the late 1980s, when Microsoft was approaching the release of Windows 3.0 -- the first version of the software to do more or less what it claimed -- Ballmer borrowed almost $50 million against his Microsoft stock and anything else he owned, using the money to buy more Microsoft shares. Software tycoons don’t do things like this. They don’t buy shares in their company, they sell them. Gates has never bought a single share of Microsoft, but Ballmer did, and that $50 million grew over the following decade to more than $14 billion, earning him the CEO job he has today. Ballmer, more than any other Microsoft employee, is literally invested in his job.
Microsoft just feels different under Ballmer’s direction. Gates was focused inward on dominating by force of will the company’s thousands of programmers, a role he still performs as chief software architect, instilling fear on video or by proxy. Ballmer could never do that and won’t even try, yet he gains much the same result simply by raising salaries. He’s a jock to Gates’s nerd. Ballmer’s focus goes the other direction, toward customers. And in a soft market, paying attention to customers pays off.
The feel of Microsoft may be different under Ballmer, but some things never change. Ballmer talks frequently about his wife and three young children. During one of these stories, the husky CEO referred to himself as being six feet two inches tall. Looking straight into Ballmer’s forehead I knew this couldn’t be true since I am only six feet even, so I called him on it. Then a miracle happened. Muscles popping, tendons straining, Ballmer somehow expanded his body, growing three inches on the spot. If companies reflect their CEOs, then Microsoft is as competitive as ever.
Reprinted with permission
The H-1B visa program was created in 1990 to allow companies to bring skilled technical workers into the USA. It’s a non-immigrant visa and so has nothing at all to do with staying in the country, becoming a citizen, or starting a business. Big tech employers are constantly lobbying for increases in H-1B quotas citing their inability to find qualified US job applicants. Microsoft cofounder Bill Gates and other leaders from the IT industry have testified about this before Congress. Both major political parties embrace the H-1B program with varying levels of enthusiasm.
But Bill Gates is wrong. What he said to Congress may have been right for Microsoft but was wrong for America and can only lead to lower wages, lower employment, and a lower standard of living. This is a bigger deal than people understand: it’s the rebirth of industrial labor relations circa 1920. Our ignorance about the H-1B visa program is being used to unfairly limit wages and steal -- yes, steal -- jobs from US citizens.
H-1B Explained
There are a number of common misunderstandings about the H-1B program, the first of which is its size. H-1B quotas are set by Congress and vary from 65,000 to 190,000 per year. While that would seem to limit the impact of the program on a nation of 300+ million, H-1B is way bigger than you think because each visa lasts for three years and can be extended for another three years after that.
At any moment, then, there are about 700,000 H-1B visa holders working in the USA.
Most of these H-1B visa holders work in Information Technology and most of those come from India. There are about 500,000 IT workers in the USA holding H-1B visas. According to the US Census Bureau, there are about 2.5 million IT workers in America. So approximately 20 percent of the domestic IT workforce isn’t domestic at all, but imported on H-1B visas. Keep this in mind as we move forward.
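Since the math behind that 700,000 figure isn’t obvious, here is a rough back-of-envelope sketch in Python. The 115,000-per-year admissions figure is my own assumption, taken from the middle of the quota range above; it is not an official statistic.

# Back-of-envelope check of the standing H-1B population.
annual_admissions = 115_000        # assumed midpoint of the 65,000-190,000 quota range
max_stay_years = 6                 # three years, extendable once
standing_pool = annual_admissions * max_stay_years
print(standing_pool)               # 690,000 -- in line with the ~700,000 estimate

it_h1b_holders = 500_000
us_it_workforce = 2_500_000
print(it_h1b_holders / us_it_workforce)   # 0.2 -- about 20 percent of the IT workforce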
H-1B is a non-immigrant visa. H-1B holders can work here for 3-6 years but then have to return to their native countries. It’s possible for H-1B holders to convert to a different kind of visa but it’s not commonly done. The most common way, in fact, for converting an H-1B visa into a green card is through marriage to a US citizen.
H-1B isn’t the only way for foreigners to work in America. They can work to some extent on student visas and, in fact, many student visas are eventually converted to H-1B for those who have a job and want to stay but maybe not immigrate.
Poorly Understood
There is a misconception about the H-1B program that it was designed to allow companies to import workers with unique talents. There has long been a visa program for exactly that purpose. The O (for outstanding) visa program is for importing geniuses and nothing else. Interestingly enough, the O visa program has no quotas. So when Bill Gates complained about not being able to import enough top technical people for Microsoft, he wasn’t talking about geniuses, just normal coders.
I don’t want to pick on just Microsoft here, but I happen to know the company well and have written over the years about its technical recruiting procedures. Microsoft has a rigorous recruitment and vetting process. So does Google, Apple -- you name the company. All of these companies will take as many O visa candidates as they can get, but there just aren’t that many who qualify, which is why quotas aren’t required.
So when Microsoft -- or Boeing, for that matter -- says a limitation on H-1B visas keeps them from getting top talent, they don’t mean it in the way that they imply. If a prospective employee is really top talent -- the kind of engineer who can truly do things others simply can’t -- there isn’t much keeping the company from hiring that person under the O visa program.
H-1B visas are about journeyman techies and nothing else.
Visa Shuffling
Companies can also transfer employees into the country who have worked for at least a year for the company overseas under an L-1 visa. These, too, are limited by quota and the quota is typically lower than for H-1Bs. Back in the late 1980s when the H-1B program was first being considered it was viewed as a preferable short-term alternative to L-1. It has since turned into something else far darker.
So has the B visa, which is intended for companies to bring their foreign employees into the US for business meetings and trade shows. You’d be amazed how many such business meetings and trade shows last 30 days as companies use B visas to enable foreign employees to work awhile in the United States. I’m told that IBM sometimes platoons workers on B visas, sending them to places like Mexico for a short time then bringing them back across the border for another stint.
Tourist visas are also commonly abused even though they specifically prohibit work.
The practice is so common that the more interesting question here isn’t which multinational corporations abuse B and tourist visas, but which ones don’t.
No Labor Shortage
A key argument for H-1B has always been that there’s a shortage of technical talent in US IT. This has been taken as a given by both major political parties. But it’s wrong. Here are six rigorous studies (1, 2, 3, 4, 5, 6) that show there is no shortage of STEM workers in the United States nor the likelihood of such a shortage in years to come.
You may recall a recent column where the IT community in Memphis, TN proved there was no labor shortage in that technology hotbed.
The whole labor shortage argument is total hogwash. Yes, there is a labor shortage at substandard wages.
Can all of this be just about money? Yes.
What are the Rules?
The rules for H-1B visas state that they must be for technical positions for which there is no comparable US citizen available and the position must pay the prevailing wage or higher.
It’s this definition of prevailing wage where we next see signs of H-1B abuse by employers. The intent of the original law was for companies not to use H-1B workers simply to save money. In the enabling legislation from 1990, however, there are two different definitions of the term “prevailing wage.” The first is quite strict while the second, which is used by self-certifying employers to set actual pay scales, has plenty of wiggle room.
Warning, dense reading ahead!
Here is the initial definition of “prevailing wage” in 8 USC 1182(n)(1)(A):
- The employer
(i) is offering and will offer during the period of authorized employment to aliens admitted or provided status as an H–1B nonimmigrant wages that are at least
(ii) the actual wage level paid by the employer to all other individuals with similar experience and qualifications for the specific employment in question, or
(iii) the prevailing wage level for the occupational classification in the area of employment,
And here is the redefinition of “prevailing wage” in 8 USC 1182(p)(4):
(4) Where the Secretary of Labor uses, or makes available to employers, a governmental survey to determine the prevailing wage, such survey shall provide at least 4 levels of wages commensurate with experience, education, and the level of supervision. Where an existing government survey has only 2 levels, 2 intermediate levels may be created by dividing by 3, the difference between the 2 levels offered, adding the quotient thus obtained to the first level and subtracting that quotient from the second level.
Note that section (p) requires that the Department of Labor set up four prevailing wage levels based upon skill but section (n) only requires a prevailing wage for occupation and location. There is no statutory requirement that the employer pick the skill level that matches the employee.
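To make the section (p) arithmetic concrete, here is a tiny sketch of how a 2-level survey becomes 4 levels under that language. The dollar amounts are invented purely for illustration.

# Sketch of the 8 USC 1182(p)(4) interpolation: two intermediate levels
# created from a 2-level survey. The figures below are made up.
low, high = 50_000, 80_000
step = (high - low) // 3                 # divide the difference between the 2 levels by 3
levels = [low, low + step, high - step, high]
print(levels)                            # [50000, 60000, 70000, 80000]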
Let’s see this in action. According to Bureau of Labor Statistics data, the mean wage for a programmer in Charlotte, NC is $73,965. But the level 1 prevailing wage is $50,170. Most prevailing wage claims on H-1B applications use the level 1 wage driving down the cost of labor in this instance by nearly a third.
If you were casually reading the statutes, by the way, you would never see this redefinition. That’s because section (p) does not refer to H-1B but rather to section (n) which is referenced by 8 USC 1101(a)(15)(H)(i)(b).
Got that?
Greed Gone Wrong
But wait, there’s more!
It’s not hard to suppose from this information that an influx of H-1B workers representing on average 20 percent of the local technical work force (those 500,000 H-1Bs against a labor pool of 2.5 million) would push down local wages. There’s plenty of anecdotal evidence that it does, too, but most of the more rigorous academic studies don’t show this because there is no easily available data.
What data is available comes from the initial employer applications for H-1B slots. These Labor Condition Applications, called LCAs, include employer estimates of prevailing wages. Because there are always more H-1B applications than there are H-1B visas granted, every employer seeking an H-1B may file 3-5 LCAs per slot, each of which can use a different prevailing wage. But when the visa application is approved, it is my understanding that sponsoring companies can choose which LCA they really mean and apply that prevailing wage number to the hire.
Because the visa has already been granted of course they’ll tend to take the lowest prevailing wage number, because that’s the number against which they match the local labor market.
Remember that part of this business of getting H-1Bs is there must not be a US citizen with comparable skills available at the local prevailing wage. If we consider that exercise using the data from Charlotte, above, a company would probably be seeking a programmer expecting $73,965 or above (after all, they are trying to attract talent, right?) but offering $50,170 or below (the multiple LCA trick). No wonder they can’t get a qualified citizen to take the job.
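Here is a sketch of that Charlotte scenario in Python. Only the BLS mean and the level 1 figure come from the numbers above; the list of filed LCA wages is hypothetical.

# Sketch of the multiple-LCA trick using the Charlotte figures.
mean_wage = 73_965                      # BLS mean wage, Charlotte programmer
level_1 = 50_170                        # level 1 "prevailing wage"
filed_lcas = [level_1, 57_000, 65_000, 73_965]   # hypothetical multiple filings per slot
chosen = min(filed_lcas)                # sponsor applies the lowest one once the visa is granted
print(chosen)                           # 50170
print(round(1 - chosen / mean_wage, 2)) # 0.32 -- nearly a third below the local mean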
Based solely on approved LCAs, 51 percent of recently granted H-1B visas were in the 25th percentile for pay or below. That’s statistically impossible under the intent of the program.
We have no clear way of knowing what companies actually pay their H-1Bs beyond the LCAs, because that information isn’t typically gathered, but remember that whatever level it is won’t include benefits that can add another 30-40 percent to a US citizen’s wage.
Extent of Abuse
Here is the Government of India touting its H-1Bs as cheaper than US workers, which of course they aren’t by law supposed to be.
I wish this was the extent of abuse, but it isn’t. A 2011 Government Accountability Office study found that approximately 21 percent of H-1B visas are simply fraudulent -- that the worker is working for a company other than the one that applied for the visa, that the visa holder’s identity has changed, that the worker isn’t qualified for H-1B based on skills or education, or the company isn’t qualified for the H-1B program.
H-1Bs, even though they aren’t citizens or permanent residents, are given Social Security numbers so they can pay taxes on their U.S. income. A study by the Social Security Administration, which is careful to point out that its job doesn’t include immigration monitoring or enforcement, found a number of H-1B anomalies, the most striking of which to me was that seven percent of H-1B employers reported no payments at all to H-1B visa holders. This is no big deal to the SSA because these people qualify for no benefits, but it makes one wonder whether those employers are under-reporting only to Social Security or to the IRS as well, and why they might do so. Those H-1B employers who do report Social Security income do so at a level that is dramatically lower than one might expect for job classifications that are legally required to pay the “prevailing wage.”
Maybe at this point I should point out that the H-1B visa program is administered by the Department of Homeland Security. Feel better?
One defense of H-1B might be that it raises overall skill levels, but studies show H-1B employees to be consistently less capable than their US citizen counterparts. This data point is especially interesting because it is drawn from the LCA data where applying companies claimed that 56 percent of H-1B applicants were in the lowest skill category and could therefore be paid the least. So at the same time companies are claiming they need the H-1B program to bring in skilled workers, the workers they are bringing in aren’t very skilled at all. Or if they are skilled, then the sponsoring companies are fudging their paperwork to justify paying lower than market wages.
Either truth is damning and the latter is downright illegal.
Here’s where I’ll give a shout-out to the Libertarian contingent reading this column because they’ll tend to say “So what? It’s every man or woman for himself. Employers should be able to do whatever they damned well please while workers can always go elsewhere.”
But it’s against the law.
Lawyer's Perspective
At this point a longtime reader of this column speaks up:
I have been a practicing immigration attorney for over 13 years. I have done many H-1B visas and like any other government program it was loaded and is still loaded with abuses… In my opinion, employers who need H-1B Visa workers should have to go through a screening process before they are allowed to submit the application and a bond should be posted if they violate the law.
For a large multinational corporation to play this game is not new. The reason that they carry on with these activities are for one reason only — control. Control of the employee and uneven bargaining at the end of the day. I have dealt with this with different multinational corporations… and they have, can and will act in the same manner. As always, it takes either an investigation by the USDOJ or massive fines (or both) to redirect bad behavior to federal compliance.
"Even if I wasn’t at ground zero in this stuff, it would still bother me," wrote another longtime reader who has spent his entire career in IT. "Our country spent decades learning to treat workers fairly and with respect. The driving force behind unions in the first place was to address serious problems in the workplace. With all this offshoring and H-1B crap, we’ve dumped 100 years of improving society down the drain. Maybe USA workers do cost too much. The problem is we are not fixing the actual problem. As more and more jobs go off shore, the damage to our economy grows. If we would fix the problems, the playing field would be more level and USA workers could compete for jobs. These abuses by corporations are not only hurting USA workers, they are hurting our nation."
Reprinted with permission
Photo Credit: wrangler/Shutterstock
I struck a chord with my recent column on H-1B visa abuse, so I will soon follow up with an enormous post that tries to explain the underlying issues. But before then, here’s something I came across that doesn’t quite fit that theme but was too interesting to let pass unnoticed -- how companies like IBM intimidate employees and discourage them from speaking up.
A few years ago there was a class action lawsuit against IBM. Thirty-two thousand server administrators were being forced to work overtime without extra pay. IBM lost the suit and paid a $65 million settlement. That’s just over $2,000 per affected employee before the lawyers took their share. Then IBM gave all those workers a 15 percent pay cut with the justification they’d get it back in overtime pay. Next IBM restricted the workers to 40 hour weeks so there would be no overtime.
VP approval was required each time someone needed to work overtime. The net result was all the server admins worked exactly 40 hours a week and for 15 percent less pay. I’m told by some of those IBMers involved that they were then put at the top of the layoff list. At the end of their severance pay period after being laid off many were rehired as contractors -- for less money and no benefits. At that point they were at 50-60 percent of their original pay. Eventually most of those jobs were shipped overseas.
One could argue, of course, that nobody forced IBM server administrators to stay with a company that would treat them that way, and I think that’s a strong argument. But I’m a guy who was fired from every job I ever held and so may not be the right person to judge proper employee or employer behavior.
Reprinted with permission
Photo Credit: HomeArt/Shutterstock
As anyone with a heartbeat knows, Apple has a product event coming on Tuesday the 23rd in San Jose at which we’ll certainly see the iPad mini, perhaps a new MacBook Pro and maybe some new iMacs. But whatever is being introduced I think it’s fair to say that the event is still in flux, because Apple late Wednesday canceled another corporate event in Arizona scheduled for the same time, this one at The Phoenician resort.
Apple booked the entire hotel (600+ rooms) for Sunday through Wednesday. Their setup people were on site Tuesday. Late Wednesday, as setup was nearing completion, Apple told the resort that they “wanted all of their managers to be on site in their stores next Tuesday for the upcoming tablet release” -- that they were canceling the function.
There are two bits of information here: Apple itself is calling Tuesday a tablet release, and it wants its retail managers in their stores that day, which strongly suggests whatever is announced will reach those stores almost immediately. Why else would the store managers need to be there?
So why the flux-up? The first we heard about a tablet event it was supposed to happen on the 17th, not the 23rd. My guess is that the ship date for those million or more iPad minis slipped a week and nobody thought to tell the people getting ready for Arizona.
That’s the way things work sometimes in big companies.
Photo Credit: The Phoenician
I’ve been away. We had a death in the family (my brother-in-law) which turned me into a single parent for a few days -- a paralyzing experience for an old man with three small boys and two large dogs. You never know how much your spouse does until it all falls on your shoulders for a while. I am both humbled and a bit more wrinkled for the experience.
While I was being a domestic god a reader passed to me this blog post by John Miano, a former software developer and founder of The Programmers Guild, now turned lawyer, who works on immigrant worker issues as a fellow at the Center for Immigration Studies (CIS), a supposedly nonpartisan think tank in Washington, DC. I don’t know Miano and frankly I hadn’t known about the CIS, but he writes boldly about H-1B visa abuses and I found that very interesting.
Here’s what I found to be the important section of the post:
An American IBM employee sent me an e-mail chain among the employee, IBM hiring managers, and IBM HR that shows how IBM flagrantly violates the law in regard H-1B usage and immigration status discrimination.
First a little background. IBM has a built-in source to import foreign labor. IBM’s Indian subsidiary (IBM Global Services India) is one of the largest importers of foreign workers on H-1B visas.
When IBM is staffing projects in the United States it can hire locally or use imported labor on H-1B visas provided by IBM India.
Now let me set the stage for the e-mail chain. An American IBM employee in the United States had been working on a software development project for a customer that had recently ended. The employee needed to find another project to avoid being laid off, as it is easier to lay off people who are not working on projects.
The American IBM employee was on an internal IBM mailing list for employees who were available for a new project. The IBM employee received a timely mass e-mail through this list from IBM HR with a job description that started out:
We are urgently seeking Business Analyst resources with Test experience for two positions on the Alcatel-Lucent account.
A lengthy job description and instructions on how to apply followed this introduction. (IBM uses the term “resource” throughout to refer to employees.) The job was located in the United States and the American IBM employee lived close to the project.
The American IBM employee responded to the job posting with a cover letter explaining how the employee’s qualifications matched the posted job requirements, the additional information requested in the job posting, and a resume.
This is the IBM hiring manager’s complete response to the American IBM employee’s application (The IBM employee provided translations of acronyms that I have indicated in square brackets.):
Thank you for your interest in the eBusiness Analyst position on the Alcatel-Lucent account. We are in the process of gathering resumes for this position and will send you a follow-up response once we have had an opportunity to review your qualifications.
Please understand the clients first preference is IGSI [IBM Global Services India] landed resource, then local US candidates, then remote, so these candidates will be in the second group to be considered. (sic)
This manager was forcing Americans to get in line for jobs behind “landed resources” from IBM India. In case you are wondering -- yes, this is illegal. See 8 USC § 1324B.
So how can IBM so flagrantly violate the law?
The reason IBM can get away with this disgraceful behavior is that discrimination enforcement requires a complaint. An employee considering a complaint has to weigh the probability of the government prosecuting the case and winning adequate compensation against the risk of retaliation and damage to his or her career. Many companies make severance packages contingent upon employees signing away rights to file such a complaint.
At this point, I am sure the IBM public relations folks reading this posting to formulate their response are thinking to themselves “Rogue hiring manager. IBM does not have a policy of discrimination.” Read on.
The American IBM employee forwarded the e-mails to IBM HR and attached the following complaint:
You included these two positions – below – again into today’s email to “Available”. Per below, they are NOT looking at Americans. Pretty clear.
You would think that IBM HR, upon learning of unlawful discrimination, would disavow the actions of its hiring manager and take decisive corrective action.
Instead, IBM HR actually responded by explaining to the American employee why IBM violates the law:
There are often US Reg [U.S. Regular] seats that also have landed GR [Global Resource] seats open – sometimes the customer will take either as long as they are working onsite – and the cost difference is too great for the business not to look for landed GRs or to use them if they are a skills match.
There you have it, straight from the IBM HR department. Foreign workers, global resources supplied by IBM India, are so cheap compared to Americans that it is worth violating the law.
This is Bob again, pointing out that the H-1B program specifically does not allow saving money to be an acceptable reason for granting such visas which can only be used, supposedly, for finding workers with skills that are literally unavailable in the domestic work force.
Miano gives half of a good reason why this sort of abuse can happen, that there generally aren’t specific complaints filed against it. I might go further and speculate that there aren’t complaints because IBM’s domestic work force is too intimidated to file them.
What happened to Respect for the Individual?
Here’s what happened: IBM has no fear of the U.S. legal system.
This hearkens back to my last column about regulatory abuse. IBM has the largest internal legal department of any corporation anywhere. IBM has more lawyers on staff than most governments. And IBM’s legal department has been over the years a great profit center, especially through enforcing intellectual property rights. If you decide to sue IBM for violating your patent, you can be sure their first response will be to find half a dozen or more IBM patents that you might have infringed, too. Just the threat of protracted legal action is enough to make most such problems simply go away, IBM is so aggressive.
And so we’ve reached a point where, as this Miano post describes, IBM appears to not even pretend anymore to be in compliance with H-1B immigration law. Why should they?
Reprinted with permission
First in a series. Thirty years ago, when I worked for a time in Saudi Arabia, I saw a public execution. I didn’t attend an execution, I didn’t witness an execution, I just happened to be there. There was in the center of this town a square and in the square were gathered hundreds of people. I worked in a building next to the square and looked out the window to see what caused all the noise. At that moment a prisoner was brought forward, his arms bound behind him. He was dragged up the steps to a platform and there fell to his knees.
Another man, whom I quickly came to understand was the executioner, climbed to the platform with the prisoner and poked him in the side with a long curved sword. The prisoner involuntarily jerked up just as the sword slashed down and just like that there was a head rolling off the platform, the body falling dead like a sack of flour. The crowd roared. Beginning to end it took less than a minute.
This was Bedouin justice. Nomadic societies have no jails so their justice systems tend to be pretty simple with punishments generally limited to the loss of wealth or body parts. Convicted criminals for certain crimes in Saudi Arabia first lose one hand then two if they repeat the crime. Other crimes go straight for the head. I don’t know what this guy did back in 1982, but I remember he had both hands. It’s a cruel and arbitrary system but you know where you stand in it and convicted criminals are fairly easy to spot.
I bring up this image because this is the first of two or three columns about law and regulation, how systems do and don’t work, and what can be done to make them better. Bedouin justice circa 1982 is our baseline and it works pretty well for simple crimes, though maybe not so well for multinational corporations.
The inspiration for this column is a recent blog post by David Rubens, a security consultant in the United Kingdom. It’s a bit dense but if you fight your way through the post it makes pretty good sense about why business regulation (or any regulation for that matter) doesn’t seem to work very well these days.
Rubens writes of Game Theory and specifically of multiple iterations of the Prisoner’s Dilemma problem, which has to do with how risk decisions are made by organizations involved in dynamic systems like business. Here, with some light editing by me, is the nut paragraph:
...when it comes down to the relationship between regulators and those being regulated… the ability of the regulated organization to maximize personal benefit is based on the ability to predict what the other side (the regulators) will do in response to the two options (which are) cooperate (play nicely) or betray (screw the customer). Given that in almost all cases the regulatory body has less funds, personnel, resources and expertise than the organization it is regulating, then it becomes clear that there is little to be gained in the long run by cooperating or playing nicely, and much to be gained by ignoring the regulator and developing a strategy that focuses purely on maximizing its own personal benefit. This is not an issue of ‘right’ or ‘wrong,’ but purely, in its own terms at least (maximization of profit, increased market share, annual bonuses, career prospects), of whether it is ‘effective’ or ‘ineffective.’
Rubens’s point, then, and I think it is a good one, is that absent some guiding moral principle usually embodied in a leader, the more powerful an organization the more it will act in its own self-interest even if (especially if) that interest is in violation of regulations or laws. You can have a strong leader who says "We’re going to play fair", and that changes the picture. But if the leader (strong or weak, but running a powerful organization) says "our only job is to maximize shareholder return", then rules and eventually laws will be broken to make that happen.
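To see the asymmetry Rubens describes in the starkest possible terms, here is a toy calculation. Every number in it is invented purely for illustration; none comes from his post.

# Toy payoff calculation for the regulator/regulated game described above.
rounds = 10
gain_if_cooperating = 10       # per round, playing by the rules
gain_if_betraying = 30         # per round, ignoring the regulator
fine = 50                      # penalty if caught
p_caught = 0.1                 # a weak regulator rarely catches anyone

cooperate = rounds * gain_if_cooperating
betray = rounds * (gain_if_betraying - p_caught * fine)
print(cooperate, betray)       # 100 vs 250.0 -- betrayal wins easily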
This makes us look again at the political argument that comes up again and again about whether free markets can be left to themselves or whether they should be regulated. I’m not attempting to answer that question here, by the way, because in practical terms it is the wrong question. The better question is: given the business and regulatory structures we have now, does financial regulation even work?
Rubens says "no, regulation doesn’t work", and I agree.
End of argument for some, who would then go on to say that since regulation doesn’t work then we shouldn’t bother with it. “Let business do its job.”
Except there are instances like protecting the old and weak where even those who oppose regulation see some advantage to it. So in order to cope with those instances, we have in recent years come to talk less about deterrents and more about rewards. Most of the regulatory responses to the financial collapse of 2008 were in the form of incentives. Instead of going to jail, the perps tended to be deemed too big to fail and actually rewarded for most of the bad things they’d already done.
Nearly every violator, even if they paid millions in settlements and fines, ended up financially ahead for having broken the rules.
What’s key here is that there’s a dual system. If you are powerful enough, you are too big to fail. If you are weak enough, you are too small to matter. In the 1990s the popularity of three strikes laws locked away petty criminals at immense cost to the system and to society. Three Strikes worked to some extent, so in that respect it was an effective policy with bad side effects. Yet nobody has proposed applying Three Strikes to these civil crimes.
Why not? Can’t we find organizations that have been caught committing similar offenses three or more times? If we had Three Strikes for big banks, for example, most of them would be out of business.
What would be wrong with that? Hundreds of banks are dissolved by the FDIC every year. There’s nothing sacred about a bank.
You may notice a pervasive theme in public discourse that government is too big and ought to be made smaller, that regulations ought to be simplified or removed altogether.
Trust us.
Yet with a weak government there is only one way to have successful deterrents, which is by making them brutal. Bedouin justice is the answer for efficient financial regulation.
One judge, one sword. Float some mortgage backed securities that you rate AAA but know will fail; manipulate the LIBOR; fix commodity prices; backdate your stock options; lose a hand.
Reprinted with permission
Photo Credit: ARENA Creative/Shutterstock
This is my promised update on bufferbloat, the problem I write about occasionally involving networks and applications that try to improve the flow of streaming data, especially video data, over the Internet but actually do the opposite, defeating TCP/IP’s own flow control code that would do the job much better if only it were allowed to. I first mentioned bufferbloat in January 2011 and it is still with us but the prognosis is improving, though it will probably take years to be fully resolved.
If you read my last column on LagBuster, you know it’s a hardware-based workaround for some aspects of bufferbloat aimed especially at gamers. LagBuster is a coping strategy for one type of bufferbloat that afflicts a population of people who aren’t willing to wait for a systemic cure. LagBuster works for gamers and might be a workaround for other kinds of low-latency data, but that’s still to be determined.
Where is the Problem?
One thing we learned from that column was that bufferbloat isn’t peculiar to routers: where there are separate broadband modems those are affected too. Here’s a very telling response from LagBuster co-inventor Ed DeWath:
The problem isn’t the router. The problem is in the modem.
Let’s use your example -- you have a fantastic router with DD-WRT and the best QoS in the world. Your router detects a high #1 priority packet and places it at the top of the egress queue so that the high priority packet immediately exits the router at the "front of the line". Wonderful! Next, the high priority packet immediately enters the "end of the line" of the modem’s single adaptive rate buffer (ARB). Well, modems have no idea about QoS or prioritization, so your high priority packet must wait in queue behind ALL the previous packets before it exits to the Internet. All packets must wait for their turn -- there is no jumping to the front of the line in a modem! Put another way, modems remove any semblance of "prioritization" from packets -- all packets are equal in a modem. Democracy in action!
That said, the LagBuster solves the modem lag problem by precisely matching the ingress and egress flow rates, and limiting the modem to only one packet in the ARB at any time. Further, with LagBuster’s dual buffer, the high packet priority status is retained. With no other packets in the ARB to cause queuing delays, the LagBuster can always send high priority packets from the high priority buffer at maximum speed.
Lastly, the dwell time ("lag") in the modem is basically proportional to the modem’s buffer size and upstream bandwidth. A consumer grade modem may add up to hundreds of milliseconds of delay to the packet stream. For example, a typical modem buffer is about 300KByte, and a typical upstream speed is about 5Mbps, which can result in as much as 500ms delay for a packet flow. QoS routers cannot ever solve that delay. LagBuster does.
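This is Bob again. Ed’s numbers check out with simple arithmetic, assuming the buffer drains no faster than the upstream link:

# Quick check of the modem dwell-time estimate above.
buffer_bytes = 300 * 1024          # ~300 KByte modem buffer
upstream_bps = 5_000_000           # 5 Mbps upstream link
delay_ms = buffer_bytes * 8 / upstream_bps * 1000
print(round(delay_ms))             # ~492 ms -- right around the 500ms Ed cites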
There’s a lot of information there for the smartypants contingent in this audience to argue about all day ("But my modem is combined with my router" -- really? Is it logically integrated or just living in the same plastic box? How do we even know?), but it is very significant to me that there are bufferbloat issues peculiar to broadband modems. I talk with the best bufferbloat experts in the world and some of them hadn’t devoted a single neuron to the modem buffer, yet from what Ed says above, ignoring the modem could hobble an overall bufferbloat solution.
Open-Source Answers
I’ll detail below what’s happening to cure bufferbloat, especially in the Linux community, but first I’d like to do something uncharacteristic, which is to make a call to action to a couple specific vendors.
The golden age of the music business (note my italics, I’m not calling it the golden age of music, but a great time to make a lot of money) was in the 1980s and 1990s when we all converted our vinyl record collections to CDs, paying anew for music we already owned. People who sell stuff love it when a technical change requires we get all new stuff. The advent of digital television, for example, sparked a boom in flat screen TVs that is only now turning into a busted bubble, but not until a lot of money was made.
Bufferbloat offers just such an advantage quite specifically to Cisco Systems and to the Motorola Mobility division of Google. Both companies make cable and DSL modems as well as routers. Other companies make these, too, but Cisco and Google can claim bigger network infrastructure shadows than any of those other companies. Cisco dominates the service provider end of the business while Motorola and Google have the most influence on the consumer side, even more than Microsoft or Apple.
Each of these companies should love to sell us all over again a hardware/software solution that eliminates bufferbloat and makes our networks sing. There is no reason why every broadband connection in America can’t have 100 millisecond latency or less. There is no good reason why this can’t be implemented in a year or two rather than 5-10 if these companies will make doing so their marketing priorities.
Everybody wants this and everyone who understands the issues seems willing to pay a bit to make bufferbloat go away. That means it is a terrific commercial opportunity, yet I don’t see much action. Some of this comes down to proprietary vendors not wanting to expose what they are doing, but much of it comes down to misplaced priorities or simple ignorance on the part of industry executives. I wonder if John Chambers at Cisco or Larry Page at Google have even heard of bufferbloat?
This is not to say that Google is doing nothing about bufferbloat. Current state of the art in defeating bufferbloat is an Open Source project, the CoDel AQM algorithm, originally by Kathie Nichols and Van Jacobson, but the lead developer (working with at least five others), Eric Dumazet, has just joined Google. Dumazet, who is French, is responsible for codel, fq_codel, and TCP Small Queues if you want to go poking around.
CoDel is now an official part of Linux.
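For the curious, here is a much-simplified sketch (in Python, for readability) of the CoDel control law as Nichols and Jacobson describe it: do nothing until queue delay has stayed above a small target for a full interval, then drop packets at a rate that climbs as the inverse square root of the drop count until the delay falls back. The real thing lives in the Linux kernel in C and handles many cases this toy ignores.

import math

TARGET = 5.0      # ms: acceptable standing queue delay
INTERVAL = 100.0  # ms: roughly a worst-case round-trip time

class CoDelSketch:
    def __init__(self):
        self.first_above = None   # when delay first stayed above TARGET
        self.dropping = False
        self.drop_next = 0.0
        self.count = 0            # drops in the current dropping state

    def on_dequeue(self, now, sojourn):
        """Return True if the packet being dequeued should be dropped.
        'sojourn' is how long (in ms) the packet sat in the queue."""
        if sojourn < TARGET:
            self.first_above = None      # delay is fine again: stand down
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above:
            self.dropping = True         # delay exceeded TARGET for a full INTERVAL
            self.count = 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            self.count += 1              # drop more often until delay comes down
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False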
Commercial Response
Linksys, after a long period of stagnation and outsourcing, is in the process of being brought more in-house. Cisco has contributed toward analyzing the behavior of several revisions of the codel algorithm, I’m told, but is apparently still unconvinced that codel is the bufferbloat solution.
Apple is reportedly well along in its own bufferbloat solution but, as usual, there is no real news escaping from Infinite Loop.
For those who want to reflash their routers and experiment, Cerowrt 3.3.8-27 "sugarland" runs some advanced forms of codel on all interfaces including WiFi, which is the area where the most work still has to be done.
In the oddest yet also most encouraging bit of news, codel over WiFi is right now being tested on a special network built where no other WiFi networks impinge -- in this case at a clothing-optional resort in Northern California, of course.
"Still, tons of work do remain on everything" reports one of the guys in the thick of it, whose name I am witholding to protect his innocence. "Head ends and cable modems are particularly dark to us, the home gateway vendors are asleep at the switch, ISPs clueless, pipelines long, and the biggest problem is that ALL the chipset vendors for Customer Premises Equipment (CPE) have moved the fast path for IP into hardware, rather than software, so it’s going to be hard to change course in the next three years unless some disruptive CPE maker shows up and leverages an ARM chip…"
Hint. Hint.
Photo Credit: nmedia/Shutterstock
If you are a serious gamer you need LagBuster.
Lag is mainly upstream (you to the game server), while bufferbloat is mainly downstream (video server to you). Bufferbloat is caused by large memory buffers in devices like routers and in applications like media players messing with the native flow control in TCP/IP. We add buffers thinking it helps but instead it hurts. Something similar happens with lag but it tends to happen at the point where your 100 or 1000 megabit-per-second local area network meets your 3-25 megabit-per-second DSL or cable Internet connection. Lag is caused by congestion at that intersection. You can tell you have lag when you can’t seem to aim or shoot fast enough in your shooter game. It’s not you, soldier, it’s the lag.
The cure to lag, we’re generally told, comes in two forms: 1) you can get a faster Internet connection, or; 2) you can implement Quality of Service (QoS) in your router. But according to my old friend Ed DeWath, who makes the $220 LagBuster, neither technique really works.
Just think of all the hardened gamers who are paying two or three times more each month for a super-fast Internet connection that isn’t really helping their game play.
Game signaling takes kilobits per second, not megabits. Yes, a faster Internet connection will empty your router cache faster, but not fast enough. Packets still back up in the cache and eventually time out, requiring a retransmission that just adds to congestion. Think of it as one of those freeway onramps with metering lights except that every few clock ticks all the waiting cars are disintegrated with a laser beam as the cache is flushed and a request is sent out for more cars, most of which will be blasted yet again.
Quality of Service is supposed to help and it might, a little, but not a lot, simply because it’s a serial cache to which the QoS is being applied. That is, the packet you really want to get through fastest is at some point stuck at the back of the line. How do you get it to the front of the queue with all those other bits in the way? Why, you blow those to smithereens, too, which takes time and produces further congestion.
The LagBuster is a box that sits between your DSL or cable modem and your router. In the LagBuster is not one buffer but two. Think of it as that metered freeway onramp but with the addition of a diamond or carpool lane that is the second parallel buffer. Network data packets leave the router and enter the LagBuster where they are sorted into game and non-game packets, each of which type gets its own parallel buffer. Game packets on their diamond lane never stop but go straight through into the modem while non-game packets are stored in their buffer and released as the modem is able to accept them. In both cases the idea is to keep the buffer in the modem nearly empty so TCP/IP flow control can operate.
Because there are two memory buffers in parallel rather than the single buffer in the typical router, game packets at the back of the queue are transferred unimpeded by the LagBuster, much faster than using QoS.
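Here is a minimal software sketch of that dual-buffer idea as I understand it. The real LagBuster is hardware, and the classify() rule below is my own made-up placeholder, not anything DeWath has described.

from collections import deque

game_q, bulk_q = deque(), deque()

def classify(packet):
    # Made-up rule of thumb: small UDP packets are probably game traffic.
    return packet.get("proto") == "udp" and packet.get("size", 0) < 256

def enqueue(packet):
    # Sort each packet into its own parallel buffer.
    (game_q if classify(packet) else bulk_q).append(packet)

def next_packet_for_modem(modem_buffer_empty):
    # Feed the modem only when its own buffer is empty, so TCP/IP flow
    # control stays in charge, and always let game packets jump the line.
    if not modem_buffer_empty:
        return None
    if game_q:
        return game_q.popleft()
    return bulk_q.popleft() if bulk_q else None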
The LagBuster eliminates game lag completely, giving those who have one a decided advantage that’s completely independent of total bandwidth. Presumably it could be used to accelerate other packet types, too, but for now the LagBuster is aimed strictly at games.
I like the LagBuster because it is very clever but also because it is made in a factory in Fremont, California with the plastic case made in Alabama. Building it in China, DeWath tells me, would have been more expensive.
Reprinted with permission
Photo Credit: Jaimie Duplass/Shutterstock
As I’ve written many times before, small companies and especially new companies are what create nearly all of the net new jobs in America, yet a new study released last week by the Hudson Institute suggests the rate of job formation by new firms is down dramatically in recent years, from an average of 11 new startup jobs per 1,000 workers at a peak in 2006 down to 7.8 new startup jobs per 1,000 workers in 2011 -- a 29 percent decline. So is the startup economy losing its oomph and should we be worried? No, the startup economy isn’t losing its oomph, but yes, it’s time to worry.
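For what it’s worth, that decline works out just as advertised, as a quick check shows:

# Check of the startup-job decline cited above.
peak, recent = 11.0, 7.8                    # new startup jobs per 1,000 workers
print(round((peak - recent) / peak, 2))     # 0.29 -- a 29 percent decline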
The Hudson Institute study was written by the think tank’s chief economist Tim Kane. He notes with concern this downward trend in startup job formation but his study doesn’t attempt to explain it, leaving that for the future. He’s not above, however, mentioning the likely negative impact of increased regulation, especially from the impending Affordable Care Act, AKA Obamacare.
There’s a lot to think about here and a lot of good research yet to be done, but I know more startup founders than the average Joe and I don’t think many of the founders I know factored Obamacare, with its 2014 inception date, into their 2011 business decisions.
There are, I think, two much more significant effects being felt here. One is that the nature of job formation has evolved over time and labor statistics haven’t yet evolved to keep pace. The other effect is the simple unavailability of credit despite low interest rates.
One might expect entrepreneurship to be rising in the United States, especially with lower fixed costs for modern service-based startups, as well as other advantages, such as higher levels of human capital, higher incomes, and the rising availability of funding through bank and venture capital.
These words are from the short preface to Tim’s paper. Let’s consider each of his points in turn:
1. Lower fixed costs for modern service-based startups. This is true and is one of the significant effects I mentioned above. If your new startup is built entirely in, say, Amazon’s cloud, there’s a lot less to be done in terms of logistics and infrastructure, so you probably save a job or two, which could account for most of the effects noted in the study.
2. Higher levels of human capital. I think this means more qualified people looking for work, which is again true despite the H-1B lobbyists claiming otherwise.
3. Higher incomes. Whose? Is your income higher so you’ve decided to start a new company? Certainly there are parts of the population that are doing better than ever. It would be interesting to learn whether those people are the ones starting new companies.
4. The rising availability of funding through bank and venture capital. Say what? This is simply not the case. Venture capital represents a relatively small proportion of the funding base for new company formation. Most new company founders in America have never met a venture capitalist. And while it is true the banks are stuffed to the rafters with cash to loan thanks to easy terms from the Federal Reserve, they simply aren’t lending money to entrepreneurs, who tend to be people with big dreams and small credit ratings.
What’s at work here to stifle company formation is lack of credit combined with some increases in efficiency that allow companies to do more with less.
Part of the issue is the definition of startup, which to our crowd means new technology companies but to the Bureau of Labor Statistics also means new Arby’s franchises. The current credit crunch makes this a particularly bad time to be getting into the roast beef sandwich game.
It’s this lack of credit that should worry us because the Obama Administration has shown no ability whatsoever to fix this problem, though the JOBS Act may finally help when it kicks-in next year.
Tim Kane is my friend, his report is interesting as far as it goes and it should spark a lot more inquiry from economists, but I think what it mainly represents is a love letter to Presidential nominee Mitt Romney, trying to give some economic ammunition to a candidate on the ropes.
Reprinted with permission
Photo Credit: Catalin Petolea/Shutterstock
Let’s everybody beat up on YouTube for not pulling that offensive anti-Muslim video that is infuriating people around the world. No, wait. As disturbing as this story is, let’s instead take a moment to try to figure out what’s really happening and why YouTube and its parent Google are behaving this way.
It’s easy to blame Google’s algorithmic obsession for this mess, but I don’t think that’s at work here at all. Yes, Google is very good (which means very bad in this case) at blaming one algorithm or another for pissing-off users. Google customer support is, in a word, terrible for this very reason, and it often seems like they don’t even care. But this case is different, because it has less to do with algorithms than it has to do with intellectual property laws.
Google lives and dies by its IP and YouTube in turn lives and dies primarily by the Digital Millennium Copyright Act (DMCA), specifically the Safe Harbor provision of that act that allows YouTube to simply pull infringing content on the demand of the IP holder rather than have to pay a $25,000 penalty as they’d do in, say, Australia.
But the DMCA Safe Harbor provision comes with certain rules that require a generally hands-off approach to content censoring by the carrier, in this case YouTube. The DMCA puts the onus on the IP holder to tell YouTube (and all YouTube competitors) to pull down infringing content. We do this every day, by the way, with pirate copies of Steve Jobs — The Lost Interview. Without eternal vigilance my children won’t be able to afford college.
YouTube maintains that the video in question doesn’t violate its basic rules for inclusion, so they can’t (or won’t) bring it down. Making an exception might set a legal precedent, their lawyers are worrying, and threaten the Safe Harbor. It would also lead to an infinitely expanded problem of people demanding YouTube pull videos just because they find them offensive.
I don’t think it is extreme at all to suggest that if YouTube sets a precedent pulling this anti-Muslim video that Kevin Smith’s movie Dogma, for example, could come under attack by Catholic groups, as it did when that movie was released.
But notice that YouTube has pulled the offending video from Egypt, India and Libya. That’s just a confirmation of the DMCA-compliance strategy described above. YouTube can pull the video in those countries because the DMCA is meaningless there. It’s in the USA and other countries with similar laws that the video has to stay up in Google’s view.
It’s not that they are deliberately being pricks about this, their lawyers are telling them to do it.
But there’s a logical endgame here and I find it interesting that it hasn’t already been played.
More than 30 years ago when I was working as an investigator for the Carter White House I butted heads with AT&T, seeking phone records. Lawyers for Ma Bell said they couldn’t give me what I asked for because it would violate privacy provisions of their customers. But they’d be happy to comply if I’d just get a proper subpoena or a court order, which I did. AT&T lawyers, in fact, gave me a sample subpoena to use.
Something similar is happening here, I’m sure. President Obama has asked YouTube to take down the anti-Muslim video just as I asked AT&T for phone records, and they’ve demurred (just as AT&T did) for very specific legal reasons. The key distinction here is that President Obama, like me back then, merely asked; he didn’t compel.
I’m fairly certain that Google would comply with a Presidential order, because such an order would be written to indemnify YouTube under the DMCA. For Google to do this properly it has to be made to do it, and the company has probably expected that all along.
So the bigger question is why hasn’t the President issued such an order? There could be any number of reasons for that -- everything from not wanting to look weak to other nations to potential ramifications for the upcoming election. My best guess is the White House is trying to get the producers of the movie to pull it, themselves, so far without success.
I have no insight into Presidential logic here. But if this crisis develops much further I’m quite confident we’ll see in the news a Presidential order.
Update: From reader comments to the post on my blog, many people think I advocate some specific behavior from either YouTube or President Obama. That’s not true. I’m not proposing that either do anything. I just explain what I believe is happening and why, which is pretty much all I ever do around here if you haven’t noticed. I’m neither trying to hobble the First Amendment nor take any political or religious stand whatsoever. If people think I am doing either, well, they aren’t reading very carefully at all, because it simply isn’t in there. So settle down, everyone.
I’ve been told the new faster-bigger-but-lighter-and-thinner iPhone 5 has a Thunderbolt interface. The press has correctly picked up on the fact the cables and connectors are different. They haven’t, however, figured out Thunderbolt is not USB. I guess we can expect the next round of iPads to use Thunderbolt too.
If it is Thunderbolt (I haven’t been able to confirm) you have to wonder why? In one sense this may just be Apple wagging the market because it can, but what if they really need a 10 gigabit-per-second interface for something? And what could that something be?
I’m a little confused by the ho-hum response in some quarters to this new phone, which appears to me to equal or exceed the specs of any phone currently on the market. That’s not good enough?
Yes, Android licenses are piling up at a rate of 1.3 million per day according to a friend of mine who sits on the Google board. And yes, Jelly Bean is a big improvement if you can find a phone that has it.
But it’s hard to compete with free.
I suspect the iPhone 5 and iPhone 4 pricing will blow a hole in the Android market. Samsung has serious competition for its top end phone, the Galaxy S III.
It is always interesting to hear what college kids think. What I am hearing is they are tired of the quirkiness and hassles with Android.
Is an older model of iPhone better than a modern Android phone? Does the bigger display on the S3 offset the new engineering and known quality of Apple’s products?
And where does this leave Microsoft? That last question may not matter.
Following Amazon's Kindle Fire HD announcement, a reader reminded me of a prediction I made at the start of the year: "If Apple gives up its position of industry leadership in 2012 the only company capable of assuming that role is Amazon.com". I stand by those words -- Amazon is really bringing the fight to Apple -- but the most important part is "if Apple gives up its position", which it clearly hasn’t, at least not yet. The real loser here, in fact, is not Apple but Microsoft.
I could be wrong about this but I don’t recall any pundits (me included) predicting that Amazon would introduce a larger format tablet, yet that’s exactly what they did. The larger Kindle Fire HD with its built-in content and app ecosystem (and that killer 4G data package!) is a viable iPad competitor at a terrific price and puts real pressure on the Cupertino, Calif.-based company. Will Apple match the price? I don’t think so. That’s not the game they want to play. But the game is on, nevertheless, and users can only benefit from competition.
Amazon copies Apple’s playbook, page-by-page, and does so beautifully. The innovation they bring is the lower price point. It’s a Toyota/Ford/Buick strategy to Apple’s Lexus/Lincoln/Cadillac. It’s traditional Microsoft positioning, but Amazon has the further advantage of not having to develop (or even pay for) an operating system. So the real loser here isn’t Apple, it’s Microsoft. The tablet market was already broadening, flattening and moving down market before yesterday’s Amazon announcement. This just makes it all the more difficult for those Windows RT tablets coming shortly to differentiate themselves in a market with thinner and thinner margins.
What’s an Asus or an Acer or even a Dell to do? $299? Shit!
Apple is the innovation leader. What, after all, has Amazon (or Microsoft) actually invented here? Nothing. They’ve emulated Apple at a lower price point and to some extent disintermediated the mobile carriers. So just as Apple had to accept competition in the MP3 player market, the company will have to adapt to the Kindle Fire HD. And I predict they’ll do so by raising the bar and then by inventing a whole new bar just as they have done in the past.
Amazon isn’t taking the design lead away from anyone, at least not yet.
The market loves this, of course, and rewards Amazon for its audacious behavior. What would Apple be worth, for example, if it could claim an Amazonian P/E? Apple would be the first $1 trillion company.
So the heat is on for both Apple and Amazon. Apple has to innovate itself out of a pricing corner. Amazon has to show increased earnings despite what must be razor thin margins. Both companies have to execute well to thrive. But Microsoft in the tablet space has to execute even better just to survive.
Apple must be scrambling to retune its iPad Mini announcement for later this month, but what Microsoft has to be feeling (especially with this week’s lackluster Nokia phone intro) is panic.
Third in a series. As many readers have pointed out, the IPO drought of the last decade has many causes beyond just decimalization of stock trading. Sarbanes-Oxley has made it significantly more expensive to be a public company than it used to be. Consolidation in the banking and brokerage industries has resulted in fewer specialists and hardly any true investment bankers surviving. The lure of derivatives trading and other rocket science activities on Wall Street has made IPO underwriting look like a staid and prosaic profession, too. Fortunately, people in positions of influence are finally starting to realize that there is no economic future for this country without new public companies.
One requirement of the JOBS Act, passed last April, was that the SEC look at trading decimalization, and especially tick sizes, to see if there has been an effect on small-cap company liquidity. If the SEC decides there is such a negative effect there’s the possibility that they introduce a new minimum tick for smaller companies of perhaps a nickel (up from a penny) to as much as a dime. I believe this would help the IPO industry, but many people disagree.
"It would be ironic that increasing the cost of trading small caps might actually improve their liquidity", says my old friend Darrell Duffie, who teaches finance at the Stanford Graduate School of Business. "This is possible. If there is not enough of a floor of per-trade profit in providing market making services, then market making services will not be provided. Too much competition for low bid-ask spreads could drive out participation. If there is a floor, high enough, then each trade execution would be more expensive, but it will be easier to get execution, possibly promoting more liquidity on net. Note that some exchanges are subsidising order execution as a way of raising liquidity.
"But overall, I don’t think raising the minimum tick size will boost IPOs much. It might improve net liquidity for them at the margin, or it might not. But the major costs and benefits to going public are likely elsewhere, I am guessing. First, it is not as desirable these days to be a public firm. Think of Sarbanes Oxley. Secondly, the source pool, the VC world, is not as vibrant as it was, for a number of other reasons. Finally, the demand for risky assets is down. Volumes of trade of all types (even big stocks) is way down".
Darrell, who is one of my heroes, is largely correct. Increasing tick sizes alone probably won’t be enough to create an IPO renaissance. Two more things are required: 1) more and braver risk capital, and; 2) real competition in the IPO space.
Crowdfunding as described by the JOBS Act may well bring the required capital to bear, though with some significant limitations, namely the present $1 million limit on capital raised per year and the $2,000 annual limit on crowdfund investments for unaccredited investors. But if even 10 million unaccredited investors became involved in crowdfunding (less than 10 percent of American households) and invested an average of only half the $2,000 limit, that would still be $10 billion per year, which is a heck of a lot of startups.
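Just to show the arithmetic behind that claim, here’s a back-of-envelope sketch. Only the $2,000 limit comes from the JOBS Act; the participation levels and average investment are assumptions of mine, there purely for scale:

```python
# Back-of-envelope crowdfunding math -- illustrative assumptions, not a forecast.
avg_investment = 2_000 / 2          # assume the average investor puts in half the $2,000 annual limit

# 10 million unaccredited investors is still less than 10 percent of US households.
for investors in (2_500_000, 5_000_000, 10_000_000):
    annual_capital = investors * avg_investment
    print(f"{investors:>10,} investors -> ${annual_capital / 1e9:.1f} billion per year")
# Even the low case is billions a year of seed capital that mostly doesn't exist today.
```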
But the Wall Street pros don’t like crowdfunding, seeing it as too little money to be worth the trouble. That may have a lot to do with the fact that Wall Street is so far running the game and of course objects to any changes that might threaten their leadership. In this case I think Wall Street could use a little competition, and following recent outsourcing trends I’d see that competition coming from China, specifically Hong Kong.
Why Not Offshore IPOs?
Finally we find a perfectly proper case for offshoring, in this case IPOs.
Why do American companies go public in America? Sometimes they don’t. When I started this gig in the 1980s a popular dodge for technology companies was to go public on the London Stock Exchange (LSE) because it was cheaper and easier and some of the capital and ownership rules were different.
I recall an e-mail exchange years ago with Borland International founder Philippe Kahn in which he claimed that Borland, as a public company, was prohibited by the SEC from lying in public statements. I pointed out to Kahn (who remains today among my Facebook friends) that his company traded only in London and therefore the SEC had nothing to do with it.
Like London in the 1980s, Hong Kong today has a robust retail investor market where the United States, with its huge institutions, no longer does. You and I simply don’t matter to Wall Street, did you know that?
IPOs in Hong Kong are perhaps a third as expensive in terms of fees as they are in America. And for that matter IPOs are happening frequently over there and only infrequently here.
Hong Kong banks want to make business loans. There’s an intrinsic value placed on public companies in Hong Kong of about $30 million. That is, purely on the basis of being traded on the Hong Kong Stock exchange, a company is assumed to be worth $30 million more than if it wasn’t traded. And while such a premium may exist here as well, in Hong Kong companies can actually borrow money against that assumed equity. Standard Chartered or HSBC -- banks that operate in the USA, too, and won’t loan a dime to entrepreneurs here -- will lend up to $10 million to recent Hong Kong IPO companies even if they aren’t profitable.
For IPOs, Hong Kong feels like Netscape circa 1995.
So I’d like to see Hong Kong reach out to US companies as the same kind of minor leagues of public trading that London was in the 1980s.
Bootstrapping to build a prototype, crowdfunding to build a company, going public in Hong Kong to build production, then eventually coming back to NASDAQ or the NYSE as a larger cap company, that’s the food chain I envision for the next decade.
And it would work.
Now let’s see if the SEC will allow it.
Second in a series. Well it took me more than the one day I predicted to finish this column, which purports to explain that dull feeling so many of us have in our hearts these days when we consider the US economy. Our entrepreneurial zeal is to some extent zapped. For a decade it seemed we needed to jump from bubble to bubble in order just to drive economic growth -- growth that ultimately didn’t last. What happened? Initial Public Offerings (IPOs) went away, that’s what happened.
I wrote several columns on job creation over the last year, columns that explained in great detail how new businesses, young businesses, and small businesses create jobs and big businesses destroy them. Big business grows by economies of scale, economies of scale are gained by increasing efficiency, and increased efficiency in big business always -- always -- means creating more economic output with fewer people.
Too Big to Survive
More economic output is good, but fewer people is bad if you need 100,000 new jobs per month just to provide for normal US population growth. This is the ultimate irony of policies that declare companies too big to fail when in fact they are more properly too big to survive.
Our policy obsession with helping big business no matter which party is in power has been a major factor in our own economic demise because it doesn’t create jobs. Our leaders and would-be leaders are really good at talking about the value of small-and-medium size businesses in America but really terrible about actually doing much to help.
Now here comes the important part: if small businesses, young businesses, new businesses create jobs, then Initial Public Offerings create wealth. Wealth creation is just as important as job creation in our economy but too many experts get it wrong when they think wealth creation and wealth preservation are the same things, because they aren’t.
Wealth creation is Steve Jobs going from being worth nothing at age 21, to $1 million at 22, $10 million at 23, $100 million at 24, to $9 billion at his death 30 years later and in the course of that career creating between Apple and Pixar 50,000 new jobs.
Wealth creation is not some third-generation scion of a wealthy family turning $4.5 billion into $9 billion over the same period of time, because that transformation inevitably involves a net loss of jobs.
The fundamental error of trickle-down (Supply Side) economics is that it depends on rich people spending money -- something they structurally can’t do fast enough to matter and philosophically won’t do, because their role in the food chain is about growth through accumulation, not through new production.
We Need More First-Generation Tycoons
New is the important word here because new jobs are created in inordinate numbers mainly by new (first generation) tycoons, not old ones or second- or third-generation ones. We need new tycoons and we make new tycoons almost exclusively by creating new public companies.
Take Ted Turner as an example. Turner created thousands of new jobs in his career but I’ll bet that he has added zero net new jobs since selling Turner Broadcasting to Time Warner in 1996. That’s not a bad thing, just an inevitable one, and the lesson to be learned from it is that we need more young Ted Turners.
Every company that is today too big to fail was once small and literally awash in new jobs. We need more such companies to create more too big to fail enterprises, but without a lot of successful IPOs that isn’t going to happen. It hasn’t happened since the late 1990s and that’s what has sapped the mojo from our economy.
This lack of IPOs is what has also turned Venture Capital from an economic miracle into an embarrassment. Lord knows I’ve written enough about what’s wrong with VCs but until writing this column I never would have identified them as victims, but they actually are.
Venture returns are in the toilet over the last decade or more not just because VCs became inbred and lazy (my usual explanation), but also because the game changed on them almost without their knowing and IPOs went away as a result. They couldn’t get their money out of portfolio companies to reinvest and compound it and when they did get their money out it was through mergers or chickenshit acquisitions that didn’t yield the same multiples.
Not only did IPOs go away, the few that did happen more recently haven’t generally been as successful (Facebook, anyone? Zynga?). We just aren’t as good at creating wealth today as we used to be.
How can that be? Since the 1990s we’ve done nothing but reduce regulations to encourage economic growth. We’ve kept interest rates inordinately low for a decade, which should have given entrepreneurs all the capital they’d ever need to build new empires, but it didn’t happen. A lack of IPOs meant new companies were starved for the growth-capital they needed to take their businesses to the next level. Banks forgot how to make money by lending it to people who could actually use it and so they didn’t lend and entrepreneurs suffered from that, too.
The world is awash in money, but those who can actually use it can’t get any.
Harbinger of Death
So what caused the death of IPOs? It starts with stock market decimalization.
Here I have to give credit to David Weild, a senior advisor at Grant Thornton Capital Markets, who made a fabulous presentation on exactly this topic a couple of weeks ago at a crowd-funding seminar in Atlanta. You’ll find his entire presentation here. The charts included with this column were mainly taken from Weild.
Stock market decimalization came along in 2000 with the idea that it would bring US markets into alignment with global share pricing standards and (here’s the harder to buy one, folks) decimalization would help small investors by marginally reducing the size of broker commissions. With stock-tick intervals set at a penny rather than at a sixteenth of a dollar (an eighth of a dollar prior to 1997), it was argued, commissions could be more accurately calculated.
Perhaps, but at what cost?
Here’s how decimalization was described by Arthur Levitt, the SEC Chairman in 2000 who has since recanted these words: "The theory is straightforward: As prices are quoted in smaller and smaller increments, there are more opportunities and less costs for dealers and investors to improve the bid or offer on a security. As more competitive bidding ensues, naturally the spread becomes smaller. And this means better, more efficient prices for investors".
Decimalization made High Frequency (automated) Trading possible -- a business tailor-made for trading large capital companies at the expense of small caps and IPOs. Add to this the rise of index and Exchange Traded Funds and all the action was soon in large cap stocks. Market makers were no longer supporting small caps by being a willing buyer to every seller. Big IPOs like General Motors flourished while little Silicon Valley IPOs dramatically declined.
Nothing but Devastation
There are 40 percent fewer US public companies now than in 1997 (55 percent fewer by share of GDP) and twice as many companies are being delisted each year as newly listed. Computers are trading big cap shares like crazy, extracting profits from nothing while smaller companies have sharply reduced access to growth capital, forcing them at best into hasty mergers.
Yes, commissions are smaller with decimalization, but it turns out that inside that extra $0.0525 of the old one-sixteenth stock tick lay enough profit to make it worthwhile for market makers to trade those smaller shares broadly.
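Here’s that arithmetic spelled out. The tick sizes are the real ones; the daily volume figure is made up, just to show the scale:

```python
# What a market maker gave up when the minimum tick shrank from a sixteenth to a penny.
SIXTEENTH = 1 / 16                    # $0.0625 minimum tick before decimalization
PENNY = 0.01                          # $0.01 minimum tick afterward

cushion_per_share = SIXTEENTH - PENNY         # the $0.0525 of spread that disappeared
shares_per_day = 200_000                      # hypothetical volume in one small-cap stock

print(f"Lost cushion per share: ${cushion_per_share:.4f}")
print(f"Lost gross spread per day: ${cushion_per_share * shares_per_day:,.0f}")
# -> roughly $10,500 a day on 200K shares: enough to pay someone to make a market
#    in a small stock at a sixteenth, but not at a penny.
```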
Decimalization pulled liquidity out of the market, especially for small-cap companies, hurting them in the process. Markets and market makers consolidated, which also proved bad for small caps and their IPOs. Wall Street consolidation was good for big banks but bad for everyone else.
Now why doesn’t that surprise me?
The dot-com bubble was a bubble, we all sort of knew, so it would have burst inevitably, but decimalization made it so bad that whole markets died, leaving us with our present situation. Here’s a chart suggesting we’d be 18.8 million jobs ahead if decimalization hadn’t happened. That’s 14.8 million jobs more than we have today and the difference between economic stagnation and boom.
But we can’t just go back to the old ways, can we?
That’s what my next column will be about.
First in a series. A couple of years ago, in an obvious moment of poor judgement, the Kauffman Foundation placed my personal rag on its list of the top 50 economics blogs in America. So from time to time I feel compelled to write about economic issues and the US Labor Day holiday provides a good excuse for doing so now. In a sense you could say I inherited this gig because my parents began their careers in the 1940s working for the US Bureau of Labor Statistics. This first of two columns looks at employment numbers in the current recovery while the second will try to explain why the economy has been so resistant to recovery and what can be done about it.
You’ll see many news stories in the next few days based on a study from the National Employment Law Project detailing how many and what kinds of jobs were lost in the Great Recession and what kinds have come back in the current recovery. Cutting to the chase: we lost eight million jobs and have recovered four million of those, but -- here’s the problem -- the recovered jobs on average pay a lot less than the jobs that were lost, which is why the US middle class is still hurting.
Jobs Lost Not Regained
According to the study, mid-wage jobs such as construction trades, manufacturing and office employees accounted for 60 percent of the employment drop during the recession but made up just 22 percent of the recovery through March 2012. So-called low-wage jobs like retail and food service workers made up 21 percent of the losses but 58 percent of the subsequent growth.
So while we may have regained 50 percent of the lost positions, we’ve regained significantly less than 50 percent of the lost personal income. With less income we have less to spend and spending is what expands economies.
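Here’s a simple way to see how that works. The job-mix percentages come from the NELP study; the hourly wages are my assumptions, there only to illustrate the effect:

```python
# Why getting 50 percent of the jobs back recovers much less than 50 percent of the income.
# Job-mix shares are from the NELP study; the hourly wages are assumed for illustration only.
mix_lost      = {"low": 0.21, "mid": 0.60, "high": 0.19}   # share of jobs lost (high = remainder)
mix_recovered = {"low": 0.58, "mid": 0.22, "high": 0.20}   # share of jobs regained (high = remainder)
wage          = {"low": 10.0, "mid": 20.0, "high": 35.0}   # assumed $/hour by tier

jobs_lost, jobs_back = 8_000_000, 4_000_000

income_lost = sum(jobs_lost * share * wage[tier] for tier, share in mix_lost.items())
income_back = sum(jobs_back * share * wage[tier] for tier, share in mix_recovered.items())

print(f"Jobs recovered:   {jobs_back / jobs_lost:.0%}")       # 50%
print(f"Income recovered: {income_back / income_lost:.0%}")   # about 41% with these wage guesses
```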
Even more damning, as the chart I’ve reproduced below shows, the absolute numbers of both high-wage and low-wage job holders continued to grow through the decade following 9/11 -- a decade that included two recessions -- while the number of mid-income earners dropped throughout, with the exception of one positive blip in 2007.
No wonder the middle class has been decimated.
Previous recessions, we’re told, had more symmetric recoveries. Most of the jobs that were lost were eventually recovered and then some. So what makes this recession different from all the previous ones?
One solid argument might be that in practical terms the Great Recession isn’t really over, so the recovery isn’t either. If we just wait awhile things might get better.
In the current political debate over jobs, for example, there are those who argue for staying the course (the Obama campaign) and those who argue for significant, if undefined, changes of course (the Romney campaign). Interestingly both campaigns feel a sense of efficacy -- that full recovery can come.
Of course it can, but will it?
Look at Japan
Our poster child for the current US economy is Japan, which has managed not to fully recover from the recession following its bubble economy of the late 1980s -- 25 years of economic stagnation made tolerable through deficit spending by the state. So it is very possible for the United States to not emerge from the current muck for decades, which is one of the reasons why so much capital sits uninvested on the sidelines earning one percent or less.
Why invest if growth is unlikely?
The jobs added in the current recovery are those that require very little capital. Expanding the third shift at McDonald’s costs a lot less than building a new semiconductor fab.
There are those who argue that many of the higher paying jobs that have not been regained are gone forever for structural reasons -- technology improvements or changes in business culture or the global economy having made them no longer useful. That’s where all the secretaries have gone, we’re told: they were extravagant relics during the Clinton era only to be killed by the Bush economy and unnecessary in the Obama era. That’s how American manufacturing went to China, we’re told.
The unprecedented housing crash and excruciatingly slow recovery of the construction industry explains why there are only half as many construction jobs as there used to be, we’re told, but this too shall pass.
Or will it?
No Entrepreneurial Zeal
I’d argue that what we’ve mainly lost is some aspect of the entrepreneurial zeal of the 1990s. There’s something different about starting a business now compared to then. It’s still exciting, technological and market changes have in many ways made it even cheaper and easier to start new companies and bring innovative products to market, but we just aren’t getting as much wealth-building for our bucks.
Am I alone in feeling this way?
How could a time with higher tax rates be what we hearken back to as a golden age of entrepreneurism? Some might argue it wasn’t that at all: what Cringely longs for is just another bubble, in this case the dot-com bubble.
Yes, the dot-com bubble was fun, just as the Japanese enjoyed buying-up half of the world’s golf courses in the 1980s, but that’s not what I am talking about here.
I fear that true recovery has been so elusive because we’ve somehow wounded our economy, making it resistant to recovery. And if we don’t find a way to heal those wounds true recovery will never happen.
What I believe to be the surprising sources of this national pain and possible ways to heal it are the topics of my next column.
Windows 8 is just over a month from hitting the market and my sense is that this initial release, at least, will be at best controversial and at worst a failure. Microsoft is simply trying to change too many things at once.
What we have here is the Microsoft Bob effect, where change runs amuck simply because it can, compounded in this case by a sense of panic in Redmond, Wash. Microsoft so desperately needs Windows 8 to be a huge success that they’ve fiddled it into a likely failure.
What About Bob?
Microsoft Bob, if you don’t remember its fleeting passage in 1995, was a so-called social interface for Windows intended to be used by novices. The idea was that Bob should be so intuitive as to require no instruction at all. And it succeeded in that, I suppose, though it was appallingly slow. I was at the Bob introduction and remember Bill Gates requiring 17 mouse clicks (I counted) just to open a file during his demo. I knew then that Bob was doomed and said so in print.
Just like every other writer who mentions Bob, I’ll take the low road here and recall that the Microsoft product manager for Bob was Melinda French. Who was going to tell the future Mrs. Gates that 17 clicks were too many?
But back then Microsoft, not Apple, was the $600 billion gorilla and could afford such indulgences. The company's market dominance has allowed it to survive any number of bad OS releases. Remember Windows ME? Remember Windows Vista? But those days are past, desktops are in decline and Microsoft doesn’t control the emerging mobile platforms. So the company tries very hard to use this new Windows release to help gain the upper hand in mobile.
It won’t work.
It won’t work because you can’t take over mobile by hobbling the desktop. By adopting a common code base for both desktops and mobile all Microsoft is doing is compromising both. This is not good, but I’m fairly confident it will be reversed shortly.
Bobbing for Trouble
Using the Windows 8 preview for the first time, the first three words out of my mouth were “How do I?” -- followed by frustration. Have you tried it? It is not intuitive. Power users will persist and figure it out, but Mom and most everyone else will not be happy.
Microsoft took away the Start button and forces you to boot on the Modern side (Metro). There is not much in Metro for traditional PC users.
Removing the Start button will confuse everyone and is a step backwards.
And developers, a vital Microsoft constituency, don’t like it either.
This too shall pass: I predict that Windows 8 Service Pack 1 will add the Start button back and allow you to boot directly into your desktop again.
Microsoft is a resilient company and will survive this misstep just as it has so many others. But what’s important here is not the bonehead design moves, but that they’ll be little to no help in the tablet and phone markets where Microsoft so desperately needs to succeed.
Just to be clear, here’s Microsoft’s internal business strategy as I understand it: In order to regain mobile momentum the company deliberately hobbles the desktop side. That way Microsoft can reasonably claim desktop sales as mobile sales and vice versa. What better way to pick up 100+ million “mobile licenses” in the next 12 months?
Only it’s BS. And even if it weren’t BS, even 100 million licenses aren’t enough to be the first or second player in a product space that will shortly have a billion units.
By overreaching with Windows 8 Microsoft not only won’t succeed, it is for the first time in 30+ years in a position to truly fail as a company.
This is Microsoft's best shot and so far the company appears to be blowing it.
Second in a series. Three quarters of the bits being schlepped over the Internet today are video bits, so video standards are more important than ever. To accommodate this huge load of video data we’ve developed compression technologies and special protocols like the Real Time Streaming Protocol (RTSP), and we’ve pushed data to the edge of the network with Content Distribution Networks (originally Akamai but now many others).
All these Internet video technologies are in transition, too, with H.264 and HTML5 video in the ascendance while stalwarts like RealVideo and even Flash Video appear to be in decline. The latter is most significant because Adobe’s Flash has been -- thanks to YouTube -- the most ubiquitous video standard. Flash video was everywhere. But with Flash apparently leaving the ever-growing mobile space, will we ever see another truly ubiquitous web video standard? We already have, and it’s called ClipStream G2 JavaScript video.
JavaScript is everywhere on the World Wide Web. If you have a browser you have JavaScript. Have an iPhone without Flash? You still have JavaScript. Have a smartphone without Java? You still have JavaScript. Even HTML5, the supposed future of Internet video, isn’t available yet on all platforms, but JavaScript is.
The Web couldn’t function without JavaScript. So if you really need to deploy something everywhere on the web, doing it in JavaScript is a great idea. JavaScript video is difficult, though, since the scripting language was never developed with video in mind. But as processors have gotten faster and devices have gained more memory, doing video in JavaScript has become feasible, even though it feels to me a bit like drawing the screen in crayon.
Just 17 Years Old
This new patented technology, only 17 years in the making, was just released in beta yesterday morning. Here’s a sample video featuring someone you may know. It’s glitchy, but think of what an achievement this is. And think how much better it will play a month or a year from now.
ClipStream G2 comes from a Canadian company, Destiny Media Technologies, which has been around since 1991. They literally invented streaming audio, and launched Internet radio before Windows Media, Quicktime or Real Networks even existed. They eventually moved into Internet video only to be killed when Flash came out for a lot less money and then YouTube for free. More recently they’ve built a business for professional musicians, securely delivering pre-release music for all the record companies to radio stations.
First in this series: "Designing a better electric plane"
Now, with Flash abandoning mobile, Destiny sees an opportunity in video again.
"JavaScript powered video doesn’t sound like a big deal", explains Destiny co-founder Steve Vestergaard, "except Javascript performs like a slug. We went from C and assembly in 1995 to Java in 1999 (maybe 100 times slower) to Javascript now (maybe another ten times slower). We have less horsepower in 2012 than we had in 1995, but we play everywhere. Some of the big guys, including chip manufacturers, see an opportunity to improve our performance and keep the cross-platform aspect. Our seven patents are about how to do streaming video when there is no horsepower at all".
It's so Special
Here’s what makes JavaScript video significant. It not only works on all recent browsers, it requires no streaming servers. Stick the Destiny folder on your web server, embed their code in your web page and that’s it.
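I haven’t deployed it myself, but if the claim holds then any plain static web server will do -- there’s nothing video-specific on the server side. A minimal sketch (the folder contents and any embed snippet would come from Destiny; everything here is just generic file serving):

```python
# A plain static file server is all the "infrastructure" JavaScript video should need.
# Nothing here knows anything about video -- it just serves files over ordinary HTTP.
import http.server
import socketserver

PORT = 8000  # serve the current directory: the page, the vendor's JS, and the video data files

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving static files at http://localhost:{PORT}/")
    httpd.serve_forever()
```

The point is that the video travels as ordinary web objects, which is also what makes the proxy caching described below possible.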
Not only is there no special server, there’s also no player since the video is rendered by the browser. There is nothing to download or maintain.
There is no transcoding required and Content Distribution Networks, like Akamai or LimeLight, aren’t needed, either.
JavaScript video is also more bandwidth efficient since it looks like regular old web content and can be buffered for reuse in proxy servers. The company estimates that streams are reused at least 10 times, saving 90 percent on bandwidth and infrastructure, not to mention $4.3 billion in annual transcoding and CDN costs if widely deployed.
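The savings arithmetic is simple. The reuse factor is the company’s estimate; the baseline traffic number is mine, purely for scale:

```python
# The proxy-cache arithmetic behind the "90 percent" claim (baseline number is illustrative).
reuse_factor = 10                      # company estimate: each stream gets served ~10 times from caches
origin_fraction = 1 / reuse_factor     # share of requests that still reach the origin web server
savings = 1 - origin_fraction          # 0.9, i.e. 90 percent

origin_tb_without_cache = 100.0        # assumed monthly origin traffic with no caching, in terabytes
print(f"Origin traffic with caching: {origin_tb_without_cache * origin_fraction:.0f} TB/month")
print(f"Bandwidth savings: {savings:.0%}")
```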
It’s not perfect, but then beta code never is. I think this is a major step, though, toward Internet simplification.
While click fraud and identity theft are probably the most common forms of larceny on the Internet, I just heard of a company that sets a whole new standard of bad, lying to advertisers about, well, everything.
Click fraud is when a website either clicks on its own ads to increase revenue, gets someone else to click on them with no intention of buying or works with botnets to generate millions of illegal clicks. I wrote a few months ago how longtime YouTubers were suffering income drops as Google algorithmically eliminated their botnet clicks. But click fraud requires a third-party ad network to work. What I am writing about here is something completely different.
I have an old friend who works in the private equity world where companies are bought and sold for millions. He was about to do exactly that with a prominent web media site (buying it for low nine figures) when due diligence revealed the amazing news that the company was completely fudging its ad numbers.
It was too good to be true.
This can only happen for sites that sell their own ads but this particular site was just sending invoices to advertisers for amounts that were consistently 10 or more times what the advertisers would have paid on the basis of real -- not fake -- clicks.
It’s click fraud without the clicks, since all the complexity of clicks and bots is eliminated in favor of just sending a bogus bill.
I wonder how common this is? Have you heard of this happening or has it happened where you work or used to work?
There is apparently no standardized ad auditing capability on the Internet so scams of this sort are actually easy to do. And advertisers often lean into it, preferring not to know precisely how effective their ads are.
Take my money, please.
Now here’s the strangest part: having discovered this blatant fraud my friend walked away from the deal but did nothing to bust the offending web site. His lawyer advised that because he signed a non-disclosure agreement with the crooks my friend might be legally liable if he turned them in to the authorities. It is his intention, though, to bust them just as soon as the NDA expires -- in three years.
Yes, I know the name of the website, but if I tell, my friend again becomes liable. So my lips, too, are sealed.
Until then I suppose advertisers will be soaked for more millions. That is unless the website owners can find a really stupid buyer.
Apple co-founder Steve Wozniak this week warned of the perils of depending too much on cloud storage and the general press reacted like this was: A) news, and; B) evidence of some inherent failure in cloud architecture. In fact it is not news (Woz never claimed it was) and mainly represents something we used to call “common sense”.
However secure you think your cloud storage is, why solely rely on it when keeping an extra backup can cost from very little to nothing at all?
No matter whose cloud you are depending on it will be subject to attack. Bigger targets get more attacks and something as big as DropBox, say, is a mighty big target, while that spare hard drive attached to a PC at your house (or my preference -- the house of a friend) is generally a target too small to even be noticed.
Whether it is bad guys stealing cloud data or generally good guys losing or otherwise screwing-up your cloud data, once it is out of your control that data is effectively gone. To read the news reports about this story it is not just gone but also now someone else’s property, given to them by you under their terms of service that you (and I, I admit) didn’t read.
I wouldn’t worry too much about the loss of ownership, because these cloud vendors are less interested in exposing your holiday office party photos than they are in feeding you ads that appear to be about subjects of interest. And even if you were giving away ownership of your data, which you aren’t, that doesn’t mean you are relinquishing any rights of your own to that data, especially if you bothered to keep a copy.
So backup and backup again. At Chez Cringely everything is backed-up locally two different ways, also backed-up to cloud storage, and even backed up again to a server in another city (it’s a PogoPlug under a bed at my mother-in-law’s house and she doesn’t even know). Some of these techniques cost me money but not all do and my total outlay for such an extravagant backup strategy is less than $100 per year.
At home we have both Time Machine (actually an Apple Time Capsule -- there’s the investment) and my 100 percent free ClearOS (formerly ClarkConnect, a specialized version of Linux for network appliances) Internet gateway running a backup server in addition to firewall, antivirus, proxy server, DNS server, etc., all on an old Intel PC I had lying around. ClearOS is the best bargain I know in data security and is ideal for protecting multi-PC, multi-platform home networks. For home use it costs absolutely nothing, runs fine on old hardware, and easily replaces $100+ in security software from places like Kaspersky, McAfee, and Symantec. And with all that crap removed from your PCs they are faster, too. ClearOS protects the computers and their users by protecting the network.
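For the server-in-another-city layer, something as dumb as a nightly rsync push over SSH is plenty. Here’s the sort of script I mean -- a sketch only, with hypothetical hostnames and paths, and it assumes rsync is installed and SSH keys are already set up:

```python
# Nightly offsite backup sketch: mirror a few local folders to a small box in another city.
# Hostnames and paths are hypothetical; assumes rsync is installed and SSH keys are in place.
import subprocess
from datetime import datetime

SOURCES = ["/Users/family/Documents/", "/Users/family/Pictures/"]
DEST = "backup@offsite-box.example.com:/backups/home/"   # e.g. a plug computer at a relative's house

for src in SOURCES:
    # -a preserves permissions and times, -z compresses over the wire, --delete mirrors removals
    result = subprocess.run(["rsync", "-az", "--delete", src, DEST],
                            capture_output=True, text=True)
    status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
    print(f"{datetime.now():%Y-%m-%d %H:%M}  {src} -> {status}")
```

Run it from cron (or launchd) every night and that layer of the strategy costs exactly nothing.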
Steve Wozniak is right -- users are going to eventually be burned if they rely solely on cloud backup. Forget about natural disasters and malware, what happens when these outfits just plain go out of business? Where is your data then? Nobody will know.
If you are living in Afghanistan, Bangladesh, Brunei, Bhutan, Cambodia, East Timor, India, Indonesia, Iran, Laos, Malaysia, Maldives, Mauritius, Mongolia, Myanmar, Nepal, Pakistan, Papua New Guinea, Singapore, Sri Lanka, Thailand, or Vietnam and want to watch the London Olympics today I’m told your only choice is YouTube. Ten events are available at any time through the International Olympic Committee (IOC) YouTube channel.
Of course 60 live channels are available in the USA through youtube.com/nbcolympics, but I think the international story is more compelling by far because it brings live competition to places where it was never available before.
YouTube seems to have really thrown itself into this Olympics thing, raising its live video game in the process and many of the advances they are rolling-out will be available broadly on the service going forward, not just for the Olympics.
On Friday, I visited YouTube in San Bruno to learn all this. My son Channing, who is 10 and addicted to YouTube fishing videos, was astounded to learn YouTube had a physical existence at all. How quickly our kids have embraced the cloud.
It will be interesting to see what breaks. Jason Gaedtke, YouTube’s director of software engineering, says they do not expect a flawless performance.
According to Gaedtke all video processing is being done in the Google Cloud and no custom hardware (or even additional hardware) is required.
With NBC, YouTube offers 60 simultaneous live events and hundreds of recorded ones, all transcoded into seven different video streams for various devices starting at 1080p and going all the way down to feature phones. There’s a new DVR interface, too, that allows viewers to pause live action or even start at the beginning of an event already in progress. On completion all live events go into the library and remain accessible.
NBC editors are even adding metadata hints to the video thumbnails so lazy viewers can go straight to the most exciting moments -- scores, finishes, etc.
It’s free (there are commercials), happening in real time (not tape delayed for primetime), and covers even the most obscure events. What’s not to like? Well, I had to identify myself as a Comcast subscriber, so I wonder what off-the-air TV viewers will get? Please let us all know.
To me this feels like Internet video really coming into its own, providing a live service that simply couldn’t be done any other way. I can imagine an Olympics or two from now when the Internet may be the dominant (possible only) way to watch the games.
My Real World Experience
July 29, 3:30 pm: We’re trying to watch the Olympics on YouTube and it, in a word, sucks. Maybe this is Comcast, though Netflix and Hulu are running just fine. More likely it is YouTube having capacity problems. Of course the commercials seem to load okay. I’ve sent a message to YouTube and will update this post as I learn more. Hopefully they’ll be able to grab a bigger chunk of cloud and fix the problems.
July 29, 7 pm: It’s the speed of the PC. A dual-core 2-GHz iMac is jerky while a 2 GHz four-core I7 Mac Mini runs fine. A 2.4 GHz AMD four-core PC running Windows 7 Professional runs fine, too. But I can’t watch the Olympics on a 2 GHz iMac, a 2 GHz Mac Mini, or my mid-2010 MacBook Pro (also 2-GHz). All three computers have two cores and are at their max RAM. Yes, I can slow down the connection, but anything above 360p clearly has problems (240p is best) and this on a 25 Mbps Internet connection. Understand that in each case I’m starting with the resolution setting on “auto”, so YouTube clearly expects my machines to run faster than they actually do.
Did YouTube test with any real world computers regularly used by small boys?
Depending on who you are talking to there were several very different reasons why the Internet was created, whether it was for military command and control (Curtis LeMay told me that), to create a new communication and commerce infrastructure (Al Gore), or simply to advance the science of digital communications (lots of people). But Bob Taylor says the Internet was created to save money. And since Bob Taylor was, more than anyone, the guy who caused the Internet to be created, well, I’ll believe him.
Taylor, probably best known for building and managing the Computer Systems Laboratory at XEROX PARC from which emerged advances including Ethernet, laser printing, and SmallTalk, was before that the DARPA program manager who commissioned the ARPANet, predecessor to the Internet. Taylor was followed in that DARPA position by Larry Roberts, Bob Kahn, and Vint Cerf -- all huge names in Internet lore -- but someone had to pull the trigger and that someone was Bob Taylor, who was tired of buying mainframes for universities.
Brief History
This was all covered in my PBS series Nerds 2.01: A Brief History of the Internet, by the way, which appears to be illegally available on YouTube if you bother to look a bit.
As DARPA’s point man for digital technology, Taylor supported research at many universities, all of which asked for expensive mainframe computers as part of the deal. With money running short one budget cycle Taylor wondered why universities couldn’t share computing resources? And so the ARPANet was born as a digital network to support remote login. And that was it -- no command and control, no eCommerce, no advancing science, just sharing expensive resources.
The people who built the ARPANet, including the boys and girls of BBN in Boston and Len Kleinrock at UCLA, loved the experience and turned it into a great technical adventure. But the people who mainly used the ARPANet, which is to say all those universities that didn’t get shiny new mainframes, hated it for exactly that reason.
In fact I’d hazard a guess that thwarting the remote login intent of DARPA may have been the inspiration for many of the non-rlogin uses we have for the Internet today.
ALASA
But this column is not about the ARPANet, it is about DARPA itself, because I have a bone to pick with those people, who could learn a thing or two still from Bob Taylor.
Last fall DARPA issued an RFP for a program called Airborne Launch Assist Space Access (ALASA), which is literally about launching small satellites into orbit from aircraft. I have a keen interest in space and have been quietly working on a Moon shot of my own since 2007 -- a project that features airborne launches. For adult supervision I’ve been working all that time with Tomas Svitek, a well-known and perfectly legitimate rocket scientist who tolerates my wackiness.
Since DARPA seemed to be aiming right for what we considered to be our technical sweet spot (airborne satellite launches) Svitek and I decided to bid for one of the three ALASA Phase One contracts to be awarded.
We didn’t win the contract. And this column is about why we didn’t win it, which we just learned, months after the fact, in a DARPA briefing.
We didn’t get one of the three contracts because, silly us, our proposal would have actually accomplished the stated objective of the program, which was launching a 100 lb satellite into Low Earth Orbit from an aircraft on 24 hours notice from a launch base anywhere on Earth (location to be specified by DARPA when the clock starts ticking) for a launch cost of under $1 million.
Here is the tactical scenario as explained to all the bidders by DARPA. An incident happens somewhere in the world potentially requiring a US military response. Viewers of The West Wing can imagine a spy satellite operated by the National Reconnaissance Office moving into position over the hotspot so people in the Situation Room can watch what’s happening. This satellite move may or may not happen in reality, but even if it does happen the intel isn’t shared with troops on the ground in any usable form. That’s what ALASA is supposed to be all about -- providing satellite surveillance to commanders on the ground. At present such a launch costs $6 million and takes weeks to prepare, so $1 million on 24 hours notice would be quite an advance.
DARPA projected the Department of Defense would need as many as 30 such launches per year.
The Phase One winners were Boeing with a proposal to launch from an F-15, Lockheed-Martin with a proposal to launch from an F-22, and Virgin Galactic with a proposal to launch from White Knight 2.
None of these solutions will work. The F-15 and F-22 are both constrained by the size of payload that can be carried. The F-15 is too low to the ground (I’ve measured this myself during a scouting mission to Wright-Patterson Air Force Base) and the F-22, being a stealth fighter, carries its ordnance internally in bomb bays. Both are limited to carrying under 5,000 lbs.
White Knight 2 could probably carry enough weight, but couldn’t get it halfway around the world on 24 hours notice.
Our solution, based on five years of work including scrounging in Ukrainian corn fields, was entirely practical. The only aircraft capable of fulfilling this mission with anything less than heroic measures is the reconnaissance version of the MiG-25, which is 50 percent larger than an F-15 and carries a 5300-liter external fuel tank weighing 10,450 lbs. Using the same perchlorate solid rocket fuel used to launch the Space Shuttle (raw material cost $2 per pound) we could do the job safely and reliably for a launch price easily under $600K. Capable of Mach 2.83 the MiG could meet the 24 hour global deployment deadline, too.
The DARPA Way
So why didn’t we at least get the safety position among the three winners?
That, my friends, comes back to the question of why the Internet was invented. The DARPA of today, which by the way trumpets at every opportunity their singular involvement in starting the Internet, has evidently forgotten that the Internet was invented to save money, because DARPA in the case of ALASA doesn’t really want a practical solution. They want heroic measures.
We’re told we were rejected because our proposal used solid fuel rockets. Our solution wasn’t (and this is a direct quote) “the DARPA way”.
Yes, our proposal was practical and, yes, it would probably work, but DARPA wants to push the technical envelope toward higher-impulse liquid-fueled rockets that can be small enough to fit under an F-15 or inside an F-22. White Knight 2, it turns out, won the safety position even though it can’t fulfill the entire mission.
There’s nothing wrong with DARPA wanting to advance the science of space propulsion. But if that was their intent, why didn’t they say so?
It is very doubtful that ALASA will result in any tactical satellites actually being deployed to support commanders in the field. Not even a liquid-fueled rocket under 5,000 lbs can put 100 lbs into orbit. Dilithium crystals are required.
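Here’s the back-of-envelope that convinces me. It’s a single-stage calculation with assumed numbers -- roughly 8.5 km/s still needed after the aircraft’s head start, an optimistic 330-second liquid engine, tanks and engine at ten percent of propellant weight -- so treat it as an illustration, not a design review:

```python
# Single-stage sanity check for a 5,000 lb air-launched rocket (all figures are assumptions).
from math import exp

dv_needed = 8500.0        # m/s still required to reach LEO after the air-launch head start
isp = 330.0               # seconds, an optimistic small liquid engine
g0 = 9.81
ve = isp * g0             # effective exhaust velocity, about 3,240 m/s

mass_ratio = exp(dv_needed / ve)       # Tsiolkovsky rocket equation: m0/mf = e^(dv/ve), ~13.8
gross_lb = 5000.0
burnout_lb = gross_lb / mass_ratio     # ~360 lb left at burnout for structure plus payload

propellant_lb = gross_lb - burnout_lb
structure_lb = 0.10 * propellant_lb    # assume tanks and engine weigh 10% of the propellant, ~465 lb

print(f"mass ratio {mass_ratio:.1f}, burnout mass {burnout_lb:.0f} lb, structure alone {structure_lb:.0f} lb")
# The structure alone outweighs everything left at burnout, so a single stage can't carry a
# 100 lb satellite. Staging helps, but tiny stages have poor structural fractions, so the
# margin stays razor thin at best.
```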
Fortunately by the time DARPA figures this out Tomas and I will have made the entire program unnecessary. You see we have this great new idea.
The theory of outsourcing and offshoring IT as it is practiced in the second decade of the 21st century comes down to combining two fundamental ideas: 1) that specialist firms, whether here or overseas, can provide quality IT services at lower cost by leveraging economies of scale, and; 2) that offshore labor markets can multiply that price advantage through labor arbitrage using cheaper yet just as talented foreign labor to supplant more expensive domestic workers who are in extremely short supply. While this may be true in the odd case, for the most part I believe it is a lie.
This lie is hurting both American workers and the ability of American enterprise to compete in global markets.
My poster child for bad corporate behavior in this sector is again IBM, which is pushing more and more of its services work offshore with the idea that doing so will help IBM’s earnings without necessarily hurting IBM customers. The story being told to support this involves a supposed IT labor shortage in America coupled with the vaunted superiority of foreign IT talent, notably in Asia but also in Eastern Europe and South America.
Strange Labor
India, having invented mathematics in the first place and now granting more computer science and computer engineering degrees each year than does the United States, is the new quality center for IT, we’re told.
Only it isn’t, at least not the way Indian IT labor is used by IBM.
I already wrote a column about the experience of former IBM customers Hilton Hotels and ServiceMaster having no trouble finding plenty of IT talent living in the tech hotbed that is Memphis, TN, thus dispelling the domestic IT labor shortage theory.
This column is about the supposed advantages of technical talent from India.
There can be some structural advantages to using Indian labor. By being 12 hours out-of-sync, Indian techies can supposedly fix bugs while their US customers sleep. But this advantage relies on Indian labor moving quickly, which it often does not given the language and cultural issues as well as added layers of management.
India, simply by being such a populous country and having so many technical graduates, does indeed have a wealth of technical talent. What’s not clear, though, is whether this talent is being applied to serve the IT needs of US customers. My belief is that Indian talent is not being used to good effect, at least not at IBM.
Cool Deception
I suspect IBM’s customers are being deceived or at least kept in the dark.
Here is my proof: right now IBM is preparing to launch an internal program with the goal of increasing in 2013 the percentage of university graduates working at its Indian Global Delivery Centers (GDCs) to 50 percent. This means that right now most of IBM’s Indian staffers are not college graduates.
Did you know that? I didn’t. I would be very surprised if IBM customers knew they were being supported mainly by graduates of Indian high schools.
To be fair, I did a search and determined that there actually are a few US job openings at IBM that require only a high school diploma. These include IBM GBS Public Sector Consultant 2012 (Entry-Level), Technical Support Professional (Entry Level) and Software Performance Analyst (Entry Level). But I have yet to meet or even hear of a high school graduate working in one of these positions in the USA.
It’s ironic that in the USA, with its supposed IT labor shortage, we can hire college graduates for jobs that in India are filled by high schoolers.
Yet in India IBM admits that the majority of its GDC workers lack university degrees. They certainly don’t advertise this fact to customers, nor do they hide it, I suppose, because they don’t have to.
What customer is going to think to ask for Indian resumes? After all this is IBM, right?
Yeah, right.
The most astounding part of this story to me is that one of the challenges IBM says it is facing in this project is to “establish a cultural change program to drive increased acceptance of staffing with graduates”.
So IBM’s Indian Global Delivery Centers are anti-education?
For more information I suggest you ask the IBMer leading this project, Joanne Collins-Smee, general manager, Globally Integrated Delivery Capabilities, Global Business Services at IBM.
I see an interesting trend among people emailing me to comment about Marissa Mayer: They see her hiring at Yahoo as some kind of trick by Google. Ms. Mayer is Google to the core, readers say, and she’s going to Yahoo simply as a commando to pick and choose future Google acquisitions.
No, she isn’t.
But I can’t write just a two-paragraph column so I’ll go on to suggest what I think Ms. Mayer could do as CEO of Yahoo, which might even have modest success, though probably not in Internet terms.
That’s an issue here because we speak of Yahoo as though it’s dead when in fact it is very profitable. Newspaper chains would kill to post Yahoo’s numbers. But on the Internet, where up-up-up is the norm, just up is bad, and flat is failure (pay attention Facebook -- what happened to your subscriber growth?). Internet rules are very different than rules for the rest of American business.
Shut Down Half the Company
Nevertheless, Marissa Mayer is not facing a Steve Jobs-type opportunity at Yahoo, nor does she present a threat to Google.
Mayer was maxed out at Google. Yes, they could have taken heroic measures to keep her but they didn’t, just as they didn’t when Tim Armstrong went to AOL. Google doesn’t care about Marissa Mayer and doesn’t care about Yahoo, either. There are no parts to acquire, since Google already duplicates everything at Yahoo. The best they can hope for is to acquire Yahoo customers.
Mayer will try to show Google how wrong they were to let her go and she’ll do that through product, which is her thing. So she’ll do what Carol Bartz would have done as CEO had Bartz come from Google and realized how little time she actually had to effect change at Yahoo. If she’s as smart as she’s supposed to be, Mayer will shut down all the bits of Yahoo that don’t make money, which is half of the company.
Shutting things down has been difficult to do at Yahoo because the company is very bureaucratic (thank you Terry Semel) and byzantine in its structure to the point where it isn’t clear at the top what a lot of those moving parts below actually do. Which parts to shut down? Which parts are dependent on other parts and dare we risk shutting down the wrong parts? None of this is insurmountable, except that anyone still at Yahoo is pretty determined to stay, so, like Sergeant Schultz, they know nothing!
Come to Jesus, Yahoo
But this is come-to-Jesus time for Yahoo and Mayer has a small window of opportunity to do audacious things, like shut down all the parts of the business that aren’t contributing to profit. Boom! Shut it down. This will goose earnings terrifically.
Remember Yahoo is in the process of selling its Asian holdings. That will go forward, liberating a huge wad of cash, most of which will go to current shareholders. I’m not sure that’s the best use of this windfall, but the political reality is that it will happen. However, if I had been Mayer negotiating my deal with Yahoo I’d demand that at least some of that cash stay in the corporate coffers. It would be pretty easy to argue that a slimmed-down Yahoo could show a better return on that money than shareholders will see in alternative investments. I hope she cut that sort of deal.
But even if she didn’t, Yahoo is still a rich company. And with that money Mayer will likely go on an acquisition binge to create an app suite like Google’s, which Yahoo presently does not have. The company’s strong position in mail is the key here. Those 310 million customers are the very heart of Yahoo and they have stuck with the company because they like the mail product. It’s a no-brainer, then, to clone Google Apps for these people. And the way to clone Google Apps is through acquisition and integration. There are plenty of companies to buy and plenty of new DNA to be acquired with those companies, which is the other reason for buying rather than building.
Yahoo needs not so much to be reorganized as to be reborn: radical change is required and that requires new blood.
Plan is in the Mail
There are limits to what Yahoo can do to emulate Google. Mayer won’t do a mobile OS, for example, because she can’t win at that game (if Microsoft can’t win, Yahoo can’t win) but she’ll be sure to extend her apps across all mobile platforms.
Then there’s that nagging question about whether Yahoo is a media company, a content company, an Internet company, what? The obvious answer is that Yahoo will play to its strengths and invest in growth markets, which include mobile and video. This may sound like a dodge but it isn’t. The question is unfair for one thing. Yahoo doesn’t have to be entirely one thing or another as long as all the parts are headed in the same direction.
Questions like this one, in my view, always come down to figuring how Ronald Reagan would have spun it.
Yahoo, tear down this wall!
The key to Yahoo’s content strategy is CEO-until-yesterday Ross Levinsohn, who should be encouraged to stay with the company. The way to keep Levinsohn is to take some of that cash and very publicly give him a $1+ billion fund to acquire video content.
That kind of money can change the game in Hollywood.
Yahoo 2.0, if done correctly, will ultimately be a third the size it is today in terms of head count, be even more profitable, and might well survive.
Will Marissa Mayer actually do these things? I don’t know.
Reprinted with permission
If Aaron Sorkin (The Social Network, The West Wing, Newsroom) wrote the story of Yahoo and he got to Marissa Mayer’s surprise entrance as Yahoo’s latest CEO, here’s how he would probably play it: the brilliant, tough, beautiful, charismatic engineer defies her Google glass ceiling and, through sheer vision and clever example, saves the pioneering Internet company. That’s how Sorkin would play it because he likes an underdog, loves smart, well-spoken people and revels in beautiful if slightly flawed characters and happy endings. But in this case Aaron Sorkin would be playing it wrong.
To be clear, were I in the position of Yahoo’s board I would probably have hired Marissa Mayer, too. On paper she’s nearly perfect (only CEO experience is missing) and the drama of her going from not even being on the list of candidates discussed to getting the big job is wonderful theater that will play well on Wall Street for weeks, maybe months. For once the Yahoo board seems to have been on the ball.
Still it probably won’t work.
Understand I have nothing against Ms. Mayer. I’ve met her only once, and I’m sure she doesn’t remember me. I don’t know her. But I know about her and, more importantly, I do know Yahoo.
I can see why she’d want the job. It’s an epic challenge for someone who didn’t really have anywhere else to go at Google yet feels destined for greater things. And I’m not here to say Marissa Mayer won’t achieve greater things in her career.
Just not at Yahoo.
Here we have a company in crisis. No, it’s worse than that: If companies had asses, Yahoo’s ass would be on fire. I knew that when I saw they had pulled founder David Filo into the position of spokesman. Filo, who is a nice guy and as nerdy as they come, is typically comatose in interviews, so whatever is happening in the executive offices at Yahoo has shot him full of fear-induced adrenaline for what would appear to be the first time in years, maybe ever.
This is it, I’m sure they’ve decided -- Yahoo’s last chance to fix itself before the for sale sign goes up.
Yet Yahoo is still Yahoo and I’d put money on this old dog not learning enough new tricks to make a difference. It’s not that Yahoo has changed, by the way, but that it hasn’t changed. It’s the world that changed around Yahoo. What worked so well in the Clinton years today barely works at all.
In our rock star CEO-obsessed business culture we believe that all it takes is the right guy or gal at the top with the right mojo to save the day. And on the short list of available charismatic leaders, Marissa Mayer looks pretty darned good.
That’s from the outside looking in. Inside Google Ms. Mayer had a reputation for being mean and not all that effective. She wasn’t in charge of that much and as time went on she was marginalized. She was employee #20 and the first female engineer, but that didn’t make her a great leader or a visionary, just an early hire.
Now maybe Ms. Mayer was held back and never given a chance to shine. Or maybe people above her saw her deficiencies and kept her at a level where she could do no harm but still be effective. Only time will tell.
This is one instance where I’d be happy to be wrong. I have no desire to see Ms. Mayer fail. I like underdogs and happy endings, too.
If Aaron Sorkin were to get a happy ending for his Yahoo story he’d need a few more elements that probably aren’t there. He’d need a subordinate character willing to risk all and act as a moral center for the company, demanding the new CEO do the right things. He’d need his Marissa Mayer character to listen and learn. But most importantly for a Sorkin story he’d need a caricature antagonist, a powerful bully of a competitor unable to get out of its own way.
Yahoo might yet find the first two of these required components, but I know it doesn’t have the third.
Reprinted with permission
About 10 weeks ago, I wrote a six-part series of columns on troubles at IBM that was read by more than three million people. Months later I’m still getting ripples of response to those columns, which I followed with a couple of updates. There is a very high level of pain in these responses that tells me I should do a better job of explaining the dynamics of the underlying issues not only for IBM but for IT in general in the USA. It comes down to class warfare.
Warfare, to be clear, isn’t genocide. There are IT people who would have me believe that they are complete victims, powerless against the death squads of corporate America. But that’s not the way it is. There’s plenty of power and plenty of bad will and plenty of ignorance to go around on all sides here. Generally speaking, though, the topic is complex enough that it needs real explaining -- explaining we’re unlikely to get elsewhere in an election year dominated by sound bites.
This is not just about IBM. It is about the culture of large corporations today, not yesterday.
Corporate Cultures
From the large corporation side the issues are cutting costs, raising revenues, increasing productivity, earnings-per-share and ultimately the price of company stock. Nothing else matters. Old corporate slogans and promises implied in employee handbooks from 1990 have no bearing in the present. It would be nice if companies kept their commitments, but they don’t. Corporate needs change over time. And with rare exceptions for truly criminal behavior we probably just have to accept this and move on.
From the perspective of IT professionals there’s a betrayal of trust that stems from the attempted commoditization of their function. What once were people now are resources and like any commodity the underlying idea is that a ton of IT here is exactly comparable to a ton of IT there.
So there’s a tug-of-war between corporate ambitions and personal goals. And to complicate things further this struggle is taking place on a playing field that also involves government, finance, and media -- all entities that deal mainly in stereotypes, generalities, and bullshit.
Wall Street just wants the numbers to work out. Wall Street can win by going long or going short so all they really need is change. This further depersonalizes the struggle that’s going on because the traders who benefit from those stock shifts don’t really care what’s happening or why. To Wall Street there is no such thing as a bad (or good) business, that is until some CEO goes to jail.
Government wants improvement and overall prosperity but their tools for accomplishing this aren’t very precise and the culture of government is both corrupt and stupid. If government wasn’t corrupt and wasn’t stupid, lobbyists would have no impact. Smart government wouldn’t need lobbyists to explain things. Uncorrupt government wouldn’t be open to manipulation by special interests. Worst of all the political game seems to have devolved to solely being one of spin -- turning the events of the day to the disadvantage of opponents without regard to what’s right or real.
And the media tends to just repeat what it is told, tacking ads to the content. There’s a cynicism in the media, combined with an inability to give more than 15 minutes of real attention to anything that isn’t a celebrity divorce, resulting in little information or useful perspective in the news.
IT Talent
There are huge economic forces at work here -- far bigger than many people or institutions recognize. Right now, for example, there are hundreds of thousands of experienced IT workers in the USA who are unemployed at the exact moment when big corporate America is screaming for relaxed immigration rules to deal with a critical shortage of IT talent.
How can this be? Do we have an IT glut or an IT shortage?
Like any commodity, the answer to this question generally comes down to delivered cost. If you are willing to pay $100 per barrel of oil there’s plenty of the stuff to be purchased. There’s a glut of $100 oil. But if you will only pay $10 per barrel of oil there’s a critical shortage. In fact at $10 America probably has no oil at all.
In America right now there is a glut of $80,000-and-above IT workers and a shortage of $40,000-and-below IT workers.
Remember that $80,000-and-above population comes with a surcharge for benefits that may not equally apply to the $40,000-and-below crowd, especially if those are overseas or in this country temporarily. A good portion of that surcharge relates to costs that increase with age, so older workers are more expensive than younger workers.
It’s illegal to discriminate based on age but not illegal to discriminate based on cost, yet one is a proxy for the other. So this is not just class warfare, it is generational warfare.
Yet government and media are too stupid to understand that.
There are some realities of IT that intervene here, creating problems for both sides. IT is a notoriously inefficient profession, for one. At its lowest levels there’s typically a very poor use of labor that might well be vulnerable to foreign assault. Exporting a crappy help desk to Pakistan might not be a good idea, but if the current standard is bad enough then Pakistan is unlikely to be worse. In short there are a lot of incompetent IT people in every country.
Toward the top end of IT the value of individual contributors becomes extreme. There are many IT organizations where certain critical functions are dependent on a single worker. These are complex or arcane tasks being done by unique individuals. You know the type. Every organization needs more of them and it is easy to justify looking wherever, even overseas, to find more. It’s at this level where the commodity argument breaks down.
Boiled Like Frogs
What we see at IBM and most of its competitors is a sales-oriented culture (get the deal -- and the commission -- at any cost) that sees technical talent as fungible, yet sometimes it isn’t fungible at all. There are many instances where IT resources can’t be replaced ton-for-ton because in the whole world there is less than a ton of what’s needed.
From the big corporate perspective we discard local resources and replace them with remote or imported resources. This might work if there were no cultural, language, or experience differences, but there are. There are differences based on familiarity with the job at hand. All these are ignored by CEOs who are operating at a level of abstraction bordering on delusion. And nobody below these CEOs sees any margin in telling them the truth.
Against this we have a cadre of IT workers who have been slowly boiled like frogs put into a cold pot. By the time they realize what’s happened these people are cooked. They are not just resentful but in many cases resentful and useless, having been so damaged by their work experience. They just want things to go back the way they were but this will never happen.
Never.
So we have a standoff. Corporate America has, for the most part, chosen a poor path when it comes to IT labor issues, but CEOs aren’t into soul-searching and nobody can turn back the clock. Labor, in turn, longs for a fantasy of their own -- the good old days.
The only answer that makes any sense is innovation -- a word that neither side uses properly, ever.
The only way out of this mess is to innovate ourselves into a better future.
But that’s for some future column. This one’s long enough.
Reprinted with permission.
Photo Credit: rudall30/Shutterstock
A reader pointed out to me this past week that the personal computer is well over 30 years old -- a number that has real consequence if you are familiar with my work. He remembered I predicted in 1992 that PCs as we knew them would be dead by now.
I was obviously a little off in my timing. But only a little off. PCs are still doomed and their end will come quicker than you think.
Not Dead Yet
Here’s what I wrote in my book Accidental Empires in 1992:
It takes society thirty years, more or less, to absorb a new information technology into daily life. It took about that long to turn movable type into books in the fifteenth century. Telephones were invented in the 1870s but did not change our lives until the 1900s. Motion pictures were born in the 1890s but became an important industry in the 1920s. Television, invented in the mid-1920’s, took until the mid-1950s to bind us to our sofas.
We can date the birth of the personal computer somewhere between the invention of the microprocessor in 1971 and the introduction of the Altair hobbyist computer in 1975. Either date puts us today (1992, remember) about halfway down the road to personal computers’ being a part of most people’s everyday lives, which should be consoling to those who can’t understand what all the hullabaloo is about PCs. Don’t worry; you’ll understand it in a few years, by which time they’ll no longer be called PCs.
By the time that understanding is reached, and personal computers have wormed into all our lives to an extent far greater than they are today, the whole concept of personal computing will probably have changed. That’s the way it is with information technologies. It takes us quite a while to decide what to do with them.
Radio was invented with the original idea that it would replace telephones and give us wireless communication. That implies two-way communication, yet how many of us own radio transmitters? In fact, the popularization of radio came as a broadcast medium, with powerful transmitters sending the same message -- entertainment -- to thousands or millions of inexpensive radio receivers. Television was the same way, envisioned at first as a two-way visual communication medium. Early phonographs could record as well as play and were supposed to make recordings that would be sent through the mail, replacing written letters. The magnetic tape cassette was invented by Phillips for dictation machines, but we use it to hear music on Sony Walkmans. Telephones went the other direction, since Alexander Graham Bell first envisioned his invention being used to pipe music to remote groups of people.
The point is that all these technologies found their greatest success being used in ways other than were originally expected. That’s what will happen with personal computers too. Fifteen years from now, we won’t be able to function without some sort of machine with a microprocessor and memory inside. Though we probably won’t call it a personal computer, that’s what it will be.
Better Value Proposition
Though I had no inkling of it back in 1992, what’s rapidly replacing the PC in our culture is the smartphone. Today the PC industry and the smartphone industry are neck-and-neck in terms of size at around $250 billion each. But which one is growing faster? For that matter, which one is growing at all?
We still rely on devices with processors and memory, they are just different devices. The mobility trend has been clear for years, with notebooks today commanding a larger market share than desktops. One significant thing about notebooks is that they required our first compromise on screen size. I write today mainly on a 13-inch notebook that replaced a 21-inch desktop, yet I don’t miss the desktop. I don’t miss it because the total value proposition is so much better with the notebook.
I wouldn’t mind going back to that bigger screen, but not if it meant scrapping my new-found mobility.
Now extend this trend in another direction and you have the ascendant smartphone -- literally a PC in your hand and growing ever more powerful thanks to Moore’s Law.
What’s still missing are clear-cut options for better I/O -- better keyboards and screens or their alternatives -- but I think those are very close. I suspect we’ll shortly have new wireless docking options, for example. For $150 today you can buy a big LCD display, keyboard and mouse if you know where to shop. Add wireless docking equivalent to the hands-free Bluetooth device in your car and you are there.
I’d be willing to leave $150 sitting on my desk if doing so allowed me to have my computing and schlepp it, too.
Or maybe we’ll go with voice control and retinal scan displays. No wonder Google is putting so much effort into those glasses.
The hardware device is becoming less important, too. Not that it’s the thin clients Larry Ellison told us we all needed back in 1998. What matters is the data and keeping it safe, but the cloud is already handling that chore for many of us, making the hardware more or less disposable.
What’s keeping us using desktops and even notebooks, then, are corporate buying policies, hardware replacement cycles and inertia.
Next-to-Last PC
How long before the PC as we knew it is dead? About five years I reckon, or 1.5 PC hardware replacement cycles.
Nearly all of us are on our next-to-last PC.
Microsoft knows this on some level. Their reptilian corporate brain is beginning to comprehend what could be the end. That’s why the company is becoming increasingly desperate for ways to maintain its central role in our digital lives. We see the first bet-the-company aspects of that in Redmond’s recent decision to run the Windows 8 kernel all the way down to ARM-powered phones and tablets even though it requires shedding features to do so.
I doubt that will be enough.
Reprinted with permission
Photo Credit: wrangler/Shutterstock
My little film about Steve Jobs has finally made it to iTunes (YouTube as well!) as a $3.99 rental, but you wouldn’t know it. Deeming the film “too controversial,” Apple has it on the site but they aren’t promoting it and won’t. The topic is “too sensitive” you see. It isn’t even listed in the iTunes new releases. You have to search for it. But it’s there.
Maybe I’m not even supposed to tell you.
Of course, there is nothing controversial or insensitive about this movie, which everyone including the critics seems to like. It’s a different look at an interesting guy and some people seem to take away a lot from it. You be the judge.
I think this says a lot more about Apple than it does about the movie. This is the most valuable company on earth and when you get that big, all news you don’t absolutely control is assumed to be bad news. And at Apple the big unanswered question is this: What if Laurene Jobs (Mrs. Steve) doesn’t like this movie?
She’s a big shareholder, there’s this whole mystic Cult of Steve thing going on, this could be really, really bad.
Only it isn’t bad at all.
The question is unanswered only because to my knowledge it has gone unasked.
Laurene Jobs has her own Blu-Ray copy of Steve Jobs — The Lost Interview and has had it since last year. I sent it to her myself at the request of her best friend.
So there: question answered.
This feels way too much like the 8th grade.
Reprinted with permission.
In a few weeks I’ll launch a YouTube channel where you’ll be able to see lots of shows readers have asked about, including Startup America and even that lost second season of NerdTV. YouTube, as the largest video streaming service anywhere, is the absolute best place for me. But YouTube isn’t the future of TV.
I know this because TV is a business and this channel I’m launching is a business and I’ve spent the last several weeks talking to investors and running the numbers every which way. I’ve spent many hours with my friend Bob Peck looking at the economics of YouTube and my unequivocal conclusion is that while YouTube is great, it isn’t TV.
YouTube is the Casino
Television is a mass medium, and it needs to be because professional content is expensive so you have to amortize it across a lot of viewers. YouTube certainly has moments when it functions as a mass medium, when millions or even tens of millions of viewers look at the same clip in a short period of time. But for most people YouTube is a smaller, more intimate viewing experience with comparably smaller production budgets.
Many YouTube videos have hundreds, not millions, of viewers.
And that works to a point for YouTube and parent Google because their cost of acquiring content has traditionally been nothing. YouTube is the casino, not the gambler in this model.
But as a casino, YouTube would really like to attract gamblers willing to place larger bets. As far as I can tell, though, such gamblers aren’t coming. Google’s current professional video initiative, which I have written about before, is a $100 million effort to attract real producers of real television, which it has done to a certain extent. But once that $100 million is spent, how many of those producers will stick around? Very few.
The problem is that nobody is willing to make the big bets required to move mainstream media to the Internet. Glenn Beck and Louis CK can do it, but their followers will go anywhere for their fix. Those two are outliers -- outliers who would not be successful on the Net had they not previously been successful on mainstream TV. Traditional TV and movies can’t or won’t follow their lead.
Even a modest cable series on, say, the Oprah Winfrey Network, is budgeted at about $190,000 per finished hour, so a 22-part series is about a $4 million commitment for the network.
If you do the math, turn the algorithms inside out, and parse the ratings data, the sweet spot for professional production costs (not cat videos) on YouTube is around $8,000/hour — just over four percent of an Oprah.
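Just to make that arithmetic concrete, here is the back-of-the-envelope version in Python. The dollar figures are the ones above; the script itself is purely illustrative.

# Back-of-the-envelope production budget math (figures from the column)
cable_cost_per_hour = 190_000     # modest cable series, dollars per finished hour
episodes = 22
print(f"22-part series commitment: ${cable_cost_per_hour * episodes:,}")        # roughly $4.2 million

youtube_cost_per_hour = 8_000     # estimated sweet spot for professional YouTube production
print(f"YouTube sweet spot vs. cable: {youtube_cost_per_hour / cable_cost_per_hour:.1%}")  # just over 4 percent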
You Can't Make a Spectacle
This doesn’t mean there won’t be good original shows on YouTube. You can make a great show for $8,000. But you can’t make a spectacle.
You can repurpose content you’ve already paid for, which is what happens on Hulu, for example, and to some extent on YouTube. But the nature of stardom has to be different — smaller — on YouTube than it is on more traditional media.
This is a large part of the reason why television has lately been in resurgence. Internet viewing was once expected to inevitably replace TV, but both networks and audiences are concluding that for certain types of programming the Internet isn’t an acceptable substitute for the boob tube.
As a producer I can accept this but doing so puts very real bounds on my ambitions working solely within the YouTube ecosystem. As a video application platform, YouTube simply doesn’t scale the way Hollywood would like it to scale.
That is not to say YouTube and YouTube producers can’t or won’t be successful. I certainly expect to be successful. But in order to make that happen we have to embrace hybrid business models where we make our profits in different ways.
The kind of shows I like to do can be made for YouTube-scale money; they just can’t be made at a profit.
This isn’t such a foreign idea, by the way. PBS, where I worked for many years, tries to pay as little as it can for programming -- hopefully nothing at all. And in a meeting I had a couple years ago at The Smithsonian Channel, they offered me $115,000 per hour for a show (their going rate) while simultaneously requiring that the production budget be $230,000 per hour. If I wanted to be on their channel it was my job to find half of the money.
And producers do. They presell foreign rights, sell product placements (Subway saved Chuck one season), or in the case of PBS they get foundation money to make up the difference.
Money Machine for Me
With YouTube offering only one way for producers to make money, little in the way of co-marketing, and almost no network halo effect of being, for example, the show right after Seinfeld, it’s inevitably a smaller and more antiseptic medium.
I spent more than a year working with a major Hollywood studio trying to figure out how to make money doing shows for YouTube and I’ll tell you that it simply can’t be done, not with Hollywood overhead.
Hollywood can repackage content and maybe make a little margin, but that doesn’t buy Porsches.
I wonder if YouTube has figured this out?
For the sake of its own long term success, YouTube needs to either consistently deliver larger audiences or pay content producers more for their work.
None of this bothers me, though, because I finally have it figured out. I’ve come up with a lateral business solution that will turn YouTube, at least for me, into a money machine.
Photo Credit: holbox/Shutterstock
Last week Microsoft kinda-sorta announced its new Surface tablet computer. This week will come a Google-branded tablet. Both are pitted against the mighty iPad. Both companies see opportunity because of what they perceive as a Steve Jobs blind spot. And both companies are introducing tablets under their own brands because they can’t get their OEMs to do tablets correctly.
For all the speculation about why Microsoft or Google would risk offending hardware OEMs by introducing name branded tablets, the reality is that neither company really had any choice but to make the hardware. In the commodity PC market, no one company is likely to be willing to make the investment necessary to compete with the highly-integrated iPad. Samsung tried, and even then it didn’t pay off for them. Taiwan Inc + Dell just don’t seem to run that way. Furthermore, it is a lot easier to make a product when you control the operating system. You have the experts right there. You don’t have to go through support channels to fix stuff. So ultimately, Microsoft and Google should be able to make much better products than their licensees.
If their OEMs want to compete, really compete, let them spend the extra money required or stop complaining.
Move Over, iPad
Both Microsoft and Google can compete at around the same price as Apple (for the Windows RT and Android versions) but no less. Microsoft’s Windows 8 version will cost more due to the Intel tax and the Windows license tax. Both Microsoft and Google will likely use Tegra as the system-on-chip (SoC) since it is the fastest out there and closest to the A5X. All the other components are available. Apple can cause component supply problems for Microsoft, and possibly some component pricing issues as a result. But Microsoft and Google will just swallow these extra costs in the beginning to remain price-competitive.
The real question is software. iPads suck for productivity apps like Office. This is of course by design, because Jobs did not value the enterprise market (that’s the blind spot). Both Microsoft and Google tablets will be aimed squarely at that spot, with the Microsoft tablet being essentially an Office/Exchange machine and the Google tablet dedicated to Google Apps.
Email is okay on the iPad, but full Outlook compatibility is still missing (contacts and calendaring still misfire from time to time). So Microsoft can win with the executive and enterprise audience, although I have to assume that Apple is going to provide a major revamp of iWork one of these days, plus an integrated iPad/MacBook Air type of productivity product.
Ironically, IT departments will be attracted to the Microsoft tablet, especially, because it won’t be as locked-down as the iPad -- a complete reversal of policy.
Target: Enterprise
Today, PCs basically beat Apple on price but not much else. Most people and companies buy PCs despite their inferiority in every respect, then try to defend their decision on the grounds of personal preference or corporate policy. Microsoft and Google won’t have a price advantage with tablets, so they’ll have to actually make a better product, or win on corporate policy, which is getting harder to do. That’s why succeeding in tablets is important for both companies but absolutely vital for Microsoft.
Microsoft has a chance in this space if the embedded/Xbox team is given the tools to do it right. But my bet is that it won’t break through to the knuckleheads on the Office and Windows teams who desperately want to protect their margins and still struggle with usability.
Google is the dark horse here. Their tablet will presumably come from Motorola Mobility and will run Google Apps like crazy. But Google, like Microsoft, doesn’t have the reputation for quality that’s required to compete in this space. Editor's Note: all indications point to ASUS as the manufacturer of the Nexus tablet.
With Jobs gone and Apple finally allowed to take some cues from the marketplace, what these two tablets may do most of all is awaken a sleeping giant, filling him with terrible resolve.
Fortunately that can only be good for consumers.
Reprinted with permission.
Microsoft’s Hollywood announcement Monday of its two Surface tablet computers was a tactical triumph but had no strategic value for the world’s largest software company because the event left too many questions unanswered. If I were to guess what was on Microsoft CEO Steve Ballmer’s mind it was simply to beat next week’s expected announcement of a Google branded tablet running Android. Microsoft, already playing catch-up to Apple’s iPad, does not want to be seen as following Google, too. So they held an event that was all style and no substance at all.
This is not to say that Microsoft shouldn’t make a tablet and couldn’t make a good one, but this particular event proved almost nothing.
Microsoft announced two tablets but only one was shown. No prices and few specs were announced. The clever keyboard cover mentioned in all stories (including this one) wasn’t functional. No reporters thought to count the ports on the sides of the one tablet available for use and they couldn’t look at their pictures to count them later because they weren’t allowed to take any that showed the sides.
What Microsoft did was play the mystery card well, copying Apple, though I’m not sure how well that will work the next time. To their credit, though, when Google’s tablet is covered here and everywhere next week you can bet the Surface line will get nearly as much comparative play as Apple’s iPad.
Now You See It, Now You Don't
With that out of the way, let’s consider Microsoft’s expectations for a tablet, which are more diverse than one might expect.
Several stories pointed out that building a Microsoft branded tablet might alienate Redmond’s long list of hardware OEMs. While this is true, I’d suggest you look at it another way. I have over the last 25+ years attended dozens of high-profile Microsoft events for products that never made it to market. Knowing that, my first instinct said this was a Microsoft threat more than anything else.
Look back at Microsoft’s many antitrust defenses and you’ll see they threatened just about every OEM at some point. Bullying is in Microsoft’s DNA. Their legal defense was that they never intended to follow through, which, by the way, didn’t work with the judges, either.
So does Microsoft really intend to introduce these tablets? Probably. Could something happen to change that determination? Sure.
One really good reason for announcing such vaporous products under the Microsoft brand is that novelty has dissuaded many commentators from questioning the whole enterprise. Microsoft is being given the benefit of the doubt based on what, a kickstand?
It's About Exchange
So here’s what I’ve been able to figure out about the two Surface machines and where they might be positioned. For one, the ARM-based unit had an nVIDIA Tegra2 processor like most of the Android tablets. The Win8 unit will use an Intel Atom.
It’s puzzling to think how Microsoft will position these tablets. But having scratched my head a lot I’ve decided their story will be that these are the corporate tablets. They’ll run Exchange really, really well, come packed already with Office, and if your IT department is comfortable with Windows, well they’ll be comfortable with these tablets, too.
It’s weak, I know, but that’s the best I could come up with, folks. Sorry.
Microsoft can’t claim these tablets are better than the iPad, and I didn’t see a word to that effect in any of the stories (I wasn’t invited to the L.A. event). They might try to compete on price, but they don’t seem to be doing that either. Nor can they, really, since Apple makes its own CPUs and Microsoft doesn’t. How can Microsoft undercut Apple on price? Maybe by thinning margins, but these tablets aren’t going to leave Redmond with a $100 bill taped to the bottom. Those days are over.
Windows is always playing catch-up to OS X just as these tablets are to the iPads. While we’ll see instances of design brilliance, like that kickstand, not even Microsoft expects their product to be in any way broadly superior to the iPad.
So Microsoft is vying here for second place and the comparison that really counts is with next week’s Google tablet, not the iPad.
Reprinted with permission
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
My recent series of columns on troubles at IBM brought me many sad stories from customers burned by Big Blue. I could write column after column just on that, but it wouldn’t be any fun so I haven’t. Only now a truly teachable lesson has emerged from a couple of these horror tales and it has to do with US IT labor economics and immigration policy. In short the IT service sector has been shoveling a lot of horseshit about H1B visas.
The story about H1B visas is simple. H1Bs are given to foreign workers to fill US positions that can’t be filled with qualified US citizens or by permanent US residents who hold green cards. H1Bs came into existence because there weren’t enough green cards and now we’re told there aren’t enough H1Bs, either. So there’s a move right now in Washington to increase the H1B limit above the current level of approximately 65,000 because we are told the alternative is IT paralysis without more foreign workers.
Says who?
Who says there will be chaos without more foreign IT workers and are they correct?
Cynics like me point out that foreign workers are paid less and, more importantly, place much less of a total financial burden on employers because they get few, if any, long term benefits. I tend to think the issue isn’t finding good workers it’s finding cheap workers. But the H1B program isn’t supposed to be about saving money, so that argument can’t be used by organizations pushing for higher visa limits. All they can claim is a labor shortage that can only be corrected by issuing more H1Bs.
Big Blue Goes Dark
To test this theory let’s look at Memphis, TN, where IBM has recently lost two big customers. One of them, Hilton Hotels, dumped IBM only this week. The other company is ServiceMaster. Hilton just announced they are canceling almost all of their contracts with IBM less than two years into a five-year contract. This includes the global IT helpdesk, all data centers, and support of "global web" (hilton.com and all related systems).
According to my sources at Hilton, the IBM contract was a nightmare. Big Blue couldn’t keep Hilton’s Exchange servers running. The SAN in the Raleigh data center hasn’t worked right since it supposedly came up in January, with some SAN outages lasting more than a day. IBM couldn’t monitor Hilton’s servers in the IBM data center. Hilton had to tell IBM when the servers were running low on disk space, for example.
Now IBM is gone, replaced by Dell, and Hilton has a new CIO.
If there’s one point I’d like you to keep in mind about this Hilton story it’s IBM’s apparent inability to monitor the Hilton servers. More about that below.
ServiceMaster is the other former IBM customer I know about in Memphis. Among its many beefs with Big Blue, ServiceMaster also had a server monitoring issue. In this case it was the company’s main database that was going unmonitored. IBM was supposed to be monitoring the servers, they were paid for monitoring the servers, but in fact IBM didn’t really monitor anything and instead relied on help desk trouble tickets to tell it when there was a problem. If you think about it this is exactly the way IBM was handling server problems at Hilton, too.
Now to the part about labor economics.
When ServiceMaster announced its decision to cancel its contract with IBM and to in-source a new IT team, the company had to find 200 solid IT people immediately. Memphis is a small community and there can’t be that many skilled IT workers there, right? ServiceMaster held a job fair one Saturday and over 1,000 people attended. They talked to them all, invited the best back for second interviews, and two weeks later ServiceMaster had a new IT department. The company is reportedly happy with the new department, whose workers are probably more skilled and more experienced than the IBMers they replaced.
No Labor Shortage Here
Where, again, is that IT labor shortage? Apparently not in Memphis.
About that database monitoring problem, ServiceMaster hired DBADirect to provide their database support from that high-tech hotbed, Florence, KY. The first thing DBADirect did was to install monitoring tools. Remember IBM didn’t have any monitoring running on the ServiceMaster database.
How can a company 1/100,000th the size of IBM afford to have monitoring? Well, it seems DBADirect has its own monitoring tools and they are included as part of their service. It allows them to do a consistently good job with less labor. DBADirect does not need to use the cheapest offshore labor to be competitive. They’ve done what manufacturing companies have been doing for 100+ years -- automating!
Even today IBM is still in its billable-hours mindset. The more bodies it takes to do a job the better. It views monitoring and automation tools as a value-added, extra-cost option. It has not occurred to IBM that it could create a better, more profitable service with more tools and fewer people. When you have good tools, the cost of the labor becomes less important.
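For readers wondering what even the most basic monitoring looks like, here is a minimal sketch in Python of the kind of automated disk-space check IBM apparently wasn’t running. The paths and threshold are hypothetical, and a real monitoring product layers alerting, trending, and escalation on top of checks like this one.

# Minimal sketch of automated disk-space monitoring (illustrative only)
import os
import shutil

WATCHED_PATHS = ["/", "/var", "/data"]   # hypothetical mount points
THRESHOLD = 0.90                         # flag any volume that is 90 percent full

def disk_alerts(paths, threshold):
    """Return a warning string for each volume above the threshold."""
    alerts = []
    for path in paths:
        if not os.path.isdir(path):          # skip hypothetical mounts that don't exist
            continue
        usage = shutil.disk_usage(path)      # named tuple: total, used, free (bytes)
        used_fraction = usage.used / usage.total
        if used_fraction >= threshold:
            alerts.append(f"{path} is {used_fraction:.0%} full")
    return alerts

if __name__ == "__main__":
    for alert in disk_alerts(WATCHED_PATHS, THRESHOLD):
        print("ALERT:", alert)   # a real service would page someone, not just print

The point isn’t the 20 lines of code, it’s that a check like this runs every few minutes without anyone filing a trouble ticket first.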
Which brings us back to the H1B visa issue. Is there an IT labor shortage in the USA that can only be solved with more H1B visas? Not in Memphis and probably not anywhere else, either.
There’s certainly a shortage of imagination, absolutely a shortage of integrity, and neither shortage is saving anyone money.
Reprinted with permission
Photo Credit: Cory Thoman/Shutterstock
Last in a series. In part one, we learned how important crowdfunding can be for helping tech startups and the economy. In part two, we worried about how criminals and con men might game the eventual crowdfunding system when it starts in earnest next January. And in this final part I suggest a strategy for crowdfunding success that essentially comes down to carpe diem -- seize the day!
Crowdfunding done right will have a huge positive impact on any economy it touches. But by done right I mean done in a manner that maximizes impact and minimizes both corruption and unnecessary complexity. This is not something that must be accomplished specifically through strict regulation, either. I’m not opposed to regulation, just suspicious of it. I’m suspicious of any government policy that purports to be so elegant as to accomplish economic wonders at little or no cost. That just hasn’t happened in my fairly long lifetime so I see no reason to expect things to change.
Seize the Day from Regulators
Left to their own devices regulators will either make the system too tough to function or too loose to protect. So I propose a different approach, one that sidesteps the regulators to some extent simply by preempting them.
Understand there are no crowdfunding regulations yet, nor is there even a designated US crowdfunding regulator. That’s all coming in the next few months on a timetable that will most likely be dictated more by Presidential election politics than anything else. So I expect to see little progress until after the November election.
Most crowdfunding startups will use this time to raise funds and sit on their hands as they wait to see how they’ll be allowed to make money after January 1. I see this as a perfect time for moving forward, though, to seize the day and attempt to define the regulatory conversation. This is possible since anything that already exists when the regulators are chosen and the regulations written must be taken into account, as opposed to secret or yet-to-be-written plans that can have no impact at all.
Bill Gates is Right
Microsoft cofounder Bill Gates told me something back in 1990 that I’ve always remembered. “The way to make money in technology,” he told me in Redmond the day before he took over as Microsoft CEO, “is by setting de facto standards.”
MS-DOS was a de facto standard as are Windows and Microsoft Office. Those three products created the greatest concentration of wealth since OPEC.
Gates knew what he was talking about.
With crowdfunding still nascent and not due to appear in anything like its grown-up form until next year, there’s a chance right now to define those de facto standards. Some of this has already been done by sites like KickStarter that have defined crowdfunding as project-based, for example. While it may seem minor or obvious for individuals to put money into projects that excite them, for US investors that’s generally a new thing. We traditionally bought shares in GM, not in the Chevy Volt.
See? Things are already different.
Tug-of-War
I see a huge opportunity in defining the way crowdfunding projects are presented to investors. This format, not regulations, will be what keeps the crowdfunding business honest and will allow it to be successful. Regulations, remember, exist mainly to deter bad guys by defining how regulated activities are not supposed to be done. Regulations are used to justify lawsuits, not build fortunes, so we can’t expect regulators to become our crowdfunding coaches.
We have in business a fundamental tug-of-war between companies and regulators. I see this with my kids (where I’m the regulator) and can see it emerging already in the crowdfunding space. The question comes down to how much businesses will be allowed to get away with. Remember, the very essence of the JOBS Act was removing regulatory restrictions, so this has been on the menu all along. If crowdfunds are allowed to get away with a lot there will be plenty of action in the space until the bubble eventually bursts and investors are hurt. Or if the regulators are wary then strict regulations may scare crowdfunds away entirely, so this opportunity will have been squandered and the JOBS Act will be seen entirely as having been election grandstanding.
The trick is to get beyond this concept of how much or how little to regulate and simply install a system that works well on its own and can function in virtually any regulatory environment through the simple expedient that this one part of the system isn’t even regulated.
Huh?
Crowdfund Communications
I’m not describing a crowdfund here, because crowdfunds will inevitably be regulated and should be. I’m describing the medium through which crowdfunds communicate with investors -- a medium that ought logically to be run by uninterested third parties.
Say you are a crook who wants to start a crowdfund to steal the savings of elderly but adventurous investors. The essence of that crime would be failing to deliver on the investment -- taking the money but giving back little or nothing in return. This could be accomplished by selling something the crowdfund had no right to sell or it could be accomplished by selling something that wasn’t as it was portrayed -- a bad investment or perhaps no investment at all -- in which case it would be pure theft.
Now here’s where it gets interesting. It is possible, through the use of financially uninterested third parties with completely different business models, to keep both types of corruption from happening, while lowering costs for all parties.
Crowdfunds selling securities they have no right to sell can be preempted by having a registry of which crowdfunds can legally sell which investments. The SEC could do this, but the New York Times could do it just as well (and with a lot more style), since it comes down to simply validating the identity of the investment, the crowdfund, and their business relationship.
The crime of misrepresentation (claiming an investment is better than it really is) can be regulated, too, but it’s easier and intrinsically better to create a standardized, publicly available, and perfectly transparent archive of due diligence materials held by a third party. If that third party determines everything that has to be disclosed about every crowdfunded investment, and those requirements are pretty comprehensive, then it becomes extremely difficult to misrepresent the deal and cheat investors.
Crowdfunds and investments alike would register with the uninterested third party. Information disclosed to potential investors would be comprehensive, standardized (nothing left out) and completely public, somewhat like a really well done Multiple Listing Service for real estate. And while crowdfunds presumably make their money from fees paid by startups, the uninterested third party would make its money in some way that is economically decoupled from the investment transaction, like advertising.
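To make the idea concrete, here is a minimal sketch in Python of what one standardized, machine-readable disclosure record might look like. Every field name here is my own invention for the example, not anything proposed by the SEC or anyone else; the point is simply that every offering answers the same comprehensive set of questions, in public.

# Hypothetical sketch of a standardized crowdfunding disclosure record
from dataclasses import dataclass, field
from typing import List

@dataclass
class DisclosureRecord:
    # All field names are invented for illustration.
    crowdfund_name: str                  # the registered crowdfunding platform
    issuer_name: str                     # the startup raising the money
    registry_id: str                     # links the offering to the public registry
    amount_sought_usd: int
    security_type: str                   # e.g. "common equity", "convertible note"
    use_of_proceeds: str
    risk_factors: List[str] = field(default_factory=list)
    prior_financings: List[str] = field(default_factory=list)

example = DisclosureRecord(
    crowdfund_name="ExampleFund",             # hypothetical platform
    issuer_name="Example Startup Inc.",       # hypothetical issuer
    registry_id="XF-2013-0001",
    amount_sought_usd=500_000,
    security_type="common equity",
    use_of_proceeds="hire two engineers and ship version 1.0",
    risk_factors=["pre-revenue", "single prototype customer"],
)
print(example)

A fixed format like this is what makes omissions obvious: a blank risk_factors list on a pre-revenue startup would stick out to any reader.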
I like this idea a lot -- so much that I’ve decided to do it myself.
My third-party crowdfunding due diligence service will be cleverly disguised as a YouTube channel called the Startup Channel.
Reprinted with permission
Photo Credit: Cory Thoman/Shutterstock
Not all of Apple’s new and upgraded products were even mentioned in Monday’s Worldwide Developer Conference keynote. I was especially interested in Apple’s tower computer, the Mac Pro, which was both upgraded and killed at the same time.
The Mac Pro is Apple’s machine for media professionals. With up to 12 CPU cores, 64 gigs of RAM and eight terabytes of disk storage, it is a very powerful machine aimed at video editors, DNA sequencers, and anyone else who needs a supercomputer under their desk. And on Monday Apple upgraded the Mac Pro for the first time in two years, adding faster processors, better GPU options (it has, remember, four PCI Express slots), and interesting SSD options. But what Apple didn’t upgrade was the Mac Pro’s USB ports to USB 3.0.
That told me the Mac Pro is doomed.
Product lines come and go all the time at computer companies, even at Apple. But this simple decision not to go to USB 3.0 shows that Apple has no further plans for the Mac Pro beyond this model. They didn’t rev the motherboard. And what makes that worth writing about is the Mac Pro is Apple’s only expandable product. There are no card slots, no extra drive bays, no GPU options on any other Apple products.
Apple has effectively killed its last conventional computer.
Taking away customer options, especially customer-installed options, will make Macs more reliable and easier to support. But what about the power users?
Apple will eventually have to explain to those folks how less is more and how this new world is even better for them. I think I know how Apple will do it.
When the Mac Pro dies for good Apple will replace it in the market with a combination of Thunderbolt-linked Mac Mini computing bricks backed up by rented cloud processing, all driven from an iMac or MacBook workstation.
I just wonder when they’ll get around to telling us.
Reprinted with permission
Second in a series. Legal crowdfunding is coming, as I explained in the first part of this series. Thanks to the Jumpstart Our Business Startups (JOBS) Act, investors big and small will soon have new ways to buy shares in startups and other small companies. This should be very good for growing companies and for the economy overall, but there’s peril for individual investors -- from scammers likely to be operating in the early days of this new law.
Most concerns hearken back to the Banking Act of 1933, enacted to bring order and regulation to the banking industry during the Great Depression. It was the collapse of the banking industry, not the stock market crash, that did most of the damage during the Depression. Also called the Glass-Steagall Act, it established federal insurance for bank deposits, keeping banks in the savings business and out of investing, leaving the trading to stock brokers and investment banks, which were not allowed to take deposits. Glass-Steagall along with the Securities Act of 1933 and the Securities Exchange Act of 1934 established a regulatory structure that many people thought worked well, until 1999 when parts of Glass-Steagall were repealed by the Gramm-Leach-Bliley Act. Sorry for all the legislative history, folks, but you can’t tell the players without a program.
I am not at all sure that we can fully blame the financial bust of 2008 on the repeal of Glass-Steagall, since a lot of bank shenanigans started before that 1999 repeal. Still there’s a place for financial regulations and the protection of smaller investors and the JOBS Act might well open up a number of problem areas even for the best-intentioned entrepreneurs.
Let me give you an example. Last week IndieGoGo, a crowdfund originally intended to support makers of independent (non-studio) films, raised $15 million to expand operations under the JOBS Act. That’s good, right? But a cold breeze is simultaneously rushing through the indie film community as filmmakers, who have always had to raise money in dribs and drabs, deal with the possible reality that under the JOBS Act they’ll have to reveal possible risk factors to their investors as the SEC presently requires of larger companies.
The old trend was to pitch your movie. The new trend might be pitch your movie then explain how the money, every cent, could be easily lost if anyone dies in the making of the film, if anyone sues for almost any reason, if the weather is too bad, if the star walks out, if, if, if… Hey, this is hard work!
But it won’t be hard for everyone because in the early days of crowdfunding some people will get away with underestimating risks, overselling equity like in The Producers, etc. This is the fear being spread -- that crowdfunding will bring out the crooks and the con men.
Of course it will, but then so did the transit of Venus this week. Crooks and con men will always be with us.
What’s misunderstood in this is how much we weren’t being protected under the old rules. I am a so-called Qualified or Accredited Investor and have for many years invested in startups as an angel. This is based on my income and/or net worth and is supposed to mean that I have enough money to survive a bad investment or two and am sophisticated enough to be responsible for my own bad decisions. A key point of the JOBS Act is that it removes this requirement allowing anyone to invest in startups.
Now here’s the important part: no entrepreneur, company, fund, or government agency has ever asked me to prove that I have what it takes to be an accredited investor. In my experience, which is pretty broad, this primary requirement that keeps little people from being involved in private equity is based entirely on the honor system.
If we look at how the current system is run, then, all those little guys who have been feeling excluded could have probably been included if only they’d pushed harder.
Frankly, I was operating as a qualified investor long before anyone even asked me the question -- before I even knew the requirement existed. It’s possible, too, that in some of those early days I may not even have been qualified, not that it mattered to me in my glorious ignorance.
Just as Citibank owned Smith Barney before it was theoretically allowed, so, too, lots of startup investors may have been making and losing money in violation of rules they didn’t even know existed.
What’s changed with the JOBS Act and crowdfunding is that what was happening all along is now going mainstream. And going mainstream means that there will be more abuses and more little investors affected, good and bad.
The trick to making this more good than bad is in how the system is designed. And by that I don’t mean how the regulations are written. We can’t rely on the politicians to fix everything because they tend to be self-important dolts. Fortunately there’s a lot we can do ourselves to make crowdfunding a huge success.
And that’s what I’ll cover in the third and final part of this series.
Reprinted with permission
Photo Credit: auremar/Shutterstock
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
Two days ago, 3,000 important websites, including Google, Facebook, YouTube, and Yahoo as well as many top Internet Service Providers, turned on their IPv6 support and this time they left it turned on. Nothing happened. Or maybe I should say nothing bad happened, which is good, very good.
The world is quickly running out of new IPv4 addresses, with almost 3.7 billion already issued. There are two workarounds: 1) complicate the Net further with cascading arrays of Network Address Translation (NAT) servers that slow things down, inhibit native inbound connections like VoIP, and defeat location services both good and bad; or 2) move to IPv6, with 128-bit addresses (IPv4 is 32-bit) that would allow giving an IPv6 address not only to every person and device but to every sock in everyone’s sock drawer as well, allowing bidirectional communication with hundreds of billions of devices from pacemakers to doorbells. Editor: Yes, but what about the socks that disappear in clothes dryers?
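To get a feel for the difference in scale, here is a quick sketch using Python’s standard ipaddress module. Nothing in it is specific to Wednesday’s switchover; it just counts the two address pools.

    # Count the total IPv4 and IPv6 address pools using the standard library.
    import ipaddress

    ipv4 = ipaddress.ip_network("0.0.0.0/0")   # all 32-bit addresses
    ipv6 = ipaddress.ip_network("::/0")        # all 128-bit addresses

    print(f"IPv4 pool: {ipv4.num_addresses:,}")      # 4,294,967,296
    print(f"IPv6 pool: {ipv6.num_addresses:.3e}")    # about 3.4e+38
    print(f"IPv6/IPv4: {ipv6.num_addresses / ipv4.num_addresses:.3e}")   # roughly 7.9e+28 -- an address for every sock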
And that’s ostensibly what happened on Wednesday except that about 80 percent of the Net is still running on IPv4 and will be until more and more sites and service providers switch over and more routers are shipped that support IPv6. While some home routers have been IPv6 capable for years, most have not, so expect ad campaigns urging us all to buy new stuff.
I’m there already since Comcast, my ISP, was among those yesterday switching on IPv6 support and my home network has been ready since our move back to California.
We noticed nothing at all on Wednesday, which is the way it should be. No readers messaged me complaining, either. I wish my network had become obviously quicker but it didn’t and probably never will. But a whole new level of utility will be available once IPv4 is turned off, which I hope is soon but probably won’t be for years.
And IPv6 won’t catch the bad guys off guard, either, since they always seem to be ahead of the technical curve. A Comcast VP who has oversight of this area for America’s largest ISP reported that his very first IPv6 email after the cutover was spam.
Reprinted with permission
Photo Credit: nmedia/Shutterstock
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
First in a series. When President Obama signed the Jumpstart Our Business Startups (JOBS) Act on April 5th, the era of crowdfunding began as individual investors everywhere were promised an opportunity to gain access to venture investments previously limited to institutions, funds, and so-called qualified investors. Come January 1, 2013, we’re told, anyone can be a venture capitalist, but hardly any of these new VCs will know what they are doing. Spurred by the new law, we will shortly see a surge of crowdfunding startups giving unqualified investors access to venture capital markets for the first time. And it will be a quagmire.
Like disk drive startups in the 1980s, each of these new crowdfunds will project 15-percent market share. Ninety-five percent of these funds will fail from over-crowding, under-funding, mismanagement, lack of deal flow, being too late, being too early, or just plain bad luck. A few will succeed and a couple will succeed magnificently, hopefully raising all boats. The point of this column and the two to follow is to better understand this phenomenon and how readers can benefit from it or at least avoid losing their shirts.
Most people think of KickStarter as the archetypal crowdfund and it is one, but KickStarter is not a way to invest in companies. It’s a way to contribute to or pre-order from entrepreneurs with interesting ideas, but not a way to buy stock in those companies because that would violate the Securities Exchange Act of 1934.
But the JOBS Act will loosen some of those Depression-era requirements, allowing small companies more freedom in how they sell their shares and allowing small investors for the first time to actually buy those shares. In general I think this is a good thing. But the devil is in the details and frankly there are no details yet about the crowdfunds made possible by the JOBS Act, because the regulations have yet to be written. In fact the regulating agency hasn’t even been selected. Still, on January 1, 2013 these funds are supposed to be open for business.
The next six months are going to be very busy in Washington, DC.
It’s important to understand that startups are not just important to America, they are vital to our very survival.
Startups Matter
Small companies create most of the new jobs in America. Companies less than five years old generate two-thirds of the new jobs created in the United States each year. Without these startups more jobs would be lost than created, the U.S. economy would permanently shrink and America would eventually lose its superpower status.
The startups that most reliably become giant American corporations and creators of wealth are technology startups. Technology startups assume types and amounts of risk that are not usually tolerated by large companies. Without startups to compete with or acquire, big technology companies would do almost nothing new. In the United States, large companies depend on startups to explore new technologies and new markets.
Startups play a particularly important role in growing jobs out of a recession. New companies produced all of the net new jobs in the United States from 2001-2007, and also from 1980-1983, the last big American downturn prior to The Great Recession. Technology startups are leading us out of our current economic mess, too.
The U.S. technology sector is particularly dependent on startups, which are born and die at astounding rates. Ninety-five percent of technology startups fail -- 95 percent. With odds at 19-to-1 against success, why do entrepreneurs even bother to build these companies? Because the potential rewards are huge (Microsoft and Apple, Cisco and Intel, Amazon and Google were all startups, remember) and for real entrepreneurs there are some things even worse than failure, like boredom or just being like everyone else.
American technology startups change the world all the time and are this country’s primary non-military global advantage, though hardly anyone knows that. Encouraging technology startups is the key to keeping America competitive and prosperous, though hardly anyone does that. Technology startups succeed despite these adversities because Americans are full of ideas, startups are so darned fun to do, and they don’t have to cost that much, either -- sometimes nothing at all.
Technology export sales drive the U.S. economy and technology startups drive US industry, yet in this era of too-big-to-die companies hardly anyone knows about or understands this phenomenon.
Inbred Venture Firms
This is great, but why do we need crowdfunds? Do some research and you’ll learn that traditional venture funds are awash with cash and some are even giving money back to their investors. It would be great if this were a sign of success but it is actually a symptom of failure. Venture returns have been poor for many years for a variety of technical reasons. Some blame it on a lack of good IPOs (compounded most recently by the Facebook debacle) and on The Great Recession, but I think there’s a simpler reason: the venture capital industry has become inbred.
Let’s compare Sand Hill Road (the very center of the Silicon Valley venture business) and ancient Rome.
While all roads may have led to Rome, the city-state was very different from the lands that supported it, just as Sand Hill Road is different from Denver or Detroit. Rome was a center of consumption, it couldn’t produce enough food to feed itself, yet it was also the center of power -- a power that became distorted over time as the ruling class grew distant from the realities of their minions. For Rome this meant Nero and Caligula and for Sand Hill Road it means second- and third-generation VCs who themselves may have never built or run a company. It means funds packed with too much money attracting entrepreneurs whose ideas aren’t so much to change the world or build their dream machine as to get funded and exit in three years.
It’s a reality distortion field that has contributed to the poor results Sand Hill Road has seen in recent years.
The glory years of venture investing are over, or threaten to be, unless new blood, new values, and new ideas make their way in. That’s the value of crowdfunding, which promises to bring vast amounts of new capital to bear, not just on the same old ideas but on a broad array of ideas and business types that could never get through a door right now on Sand Hill Road.
Next, how not to do crowdfunding.
Reprinted with permission
Photo Credit: Kristijan Zontar/Shutterstock
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
Beta versions of Windows 8 this week lost their nifty Aero user interface, which Microsoft’s top user interface guy now calls "cheesy" and "dated" though two weeks ago he apparently loved it. Developers are scratching their heads over this UI flatification of what’s supposed to become the world’s most popular operating system. But there’s no confusion at my house: Aero won’t run on a phone.
Look at the illustration of connected device growth, which shows projected growth in Internet devices. Keep in mind while reading this that a PC lasts at least three years, a phone lasts 18 months, and nobody knows yet how long the average tablet will be around, but I’ll guess two years. Add that knowledge to these sales projections and we can see that mobile devices (phones and tablets) have become the game in software, and whoever has been shouting about that at Microsoft is finally being heard.
This chart suggests that Microsoft’s OS and application dominance will quickly decline if the company doesn’t get a whole lot luckier in phones and tablets. So Microsoft is going all-in on mobile, and the bet-the-company way to do that is to make sure the Win8 kernel will run on all three platforms. That’s a common code base from top to bottom, which should make Grand Theft Auto an interesting phone experience, eh? But since phones still aren’t a match for desktop hardware, that necessarily means dumbing down Windows a bit to make the merge work.
You could run Aero on an ARM-based smart phone, but you might not be able to run anything else. Or you could run Aero on your phone but with half the battery life. Not good.
These beta versions are aimed mainly at developers, of course, and that’s the point: mobile development is now more important than desktop -- to everyone.
Microsoft certainly expects (hopes) phone hardware to dramatically improve in the next few months. That’s a good bet. But it also suggests that Windows 8 Phone, if that’s what it is called, will run a lot better on expensive phones than it will on cheap ones. And that’s not a recipe for global domination. Microsoft’s answer, I’m guessing, will be subsidies to smart phone buyers like the $400 PC subsidies we saw in the early broadband era of the mid-1990s.
As a good friend of mine who knows Microsoft very well likes to say, "the Devil always pays cash".
Reprinted with permission
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
This is my sixth column about the Fukushima Daiichi nuclear accident that started last year in Japan following the tsunami. But unlike those previous columns (1,2,3,4,5), this one looks forward to the next Japanese nuclear accident, which will probably take place at the same location.
That accident, involving nuclear fuel rods, is virtually inevitable, most likely preventable, and the fact that it won’t be prevented comes down solely to Japanese government and Tokyo Electric Power Company (TEPCO) incompetence and stupidity. Japanese citizens will probably die unnecessarily because the way things are done at the top in Japan is completely screwed up.
Understand that I have some cred in this space, having worked three decades ago as an investigator for the Presidential Commission on the Accident at Three Mile Island and later written a book about that accident. I also ran a technology consulting business in Japan for 20 years.
Too Much Cleanup, Not Enough Time
Here’s the problem: In the damaged Unit 4 at Fukushima Daiichi there are right now 1,535 fuel rods that have yet to be removed from the doomed reactor. The best-case estimate of how long it will take to remove those rods is three years. Next to the Unit 4 reactor and in other places on the same site there are more than 9,000 spent fuel rods stored mainly in pools of water but in some spots exposed to the air and cooled by water jets. The total inventory of unstable nuclear fuel on the site exceeds 11,000 rods. Again, the best estimate of how long it will take to remove all this fuel and spent fuel is 10 years -- but it may well take longer.
Fukushima has always been a seismically active area. Called the Japan Trench Subduction Zone, it has experienced nine seismic events of magnitude 7 or greater since 1973. There was a 5.8 earthquake in 1993; a 7.1 in 2003; a 7.2 earthquake in 2005; and a 6.2 earthquake offshore of the Fukushima facility just last year, all of which caused shutdowns or damage to nuclear plants. Even small earthquakes can damage nuclear plants: a 6.8 quake on Japan’s west coast in 2007 cost TEPCO $5.62 billion.
But last year’s 9.0 earthquake and tsunami made things far worse, further destabilizing the local geology. According to recently revised estimates by the Japanese government, the probability of an earthquake of 7.0 magnitude or greater in the region during the next three years is now 90 percent. The Unit 4 reactor building that was substantially damaged by the tsunami and subsequent explosions will not survive a 7.0+ earthquake.
An earthquake of 7.0 or greater is likely to disrupt cooling water flow and further damage fuel storage pools possibly making them leak. If this happens the fuel rods will be exposed, will get hotter and eventually melt, puddling in the reactor basement and beneath the former storage ponds. This is a nuclear meltdown, which will lead to catastrophic (though non-nuclear) explosions and the release of radioactive gases, especially Cesium 137.
The amount of Cesium 137 in the fuel rods at Fukushima Daiichi is the equivalent of 85 Chernobyls.
To review, there is a 90 percent chance of a large earthquake in the minimum three years required to remove just the most unstable part of the fuel load at Fukushima Daiichi. The probability of a large earthquake in the 10+ years required to completely defuel the plant is virtually 100 percent. If a big earthquake happens before that fuel is gone there will be global environmental catastrophe with many deaths.
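The "virtually 100 percent" figure follows from simple compounding. Here is a back-of-envelope sketch in Python; it takes the government’s 90-percent-per-three-years estimate at face value and assumes successive three-year windows are independent, which is my simplification, not part of the official estimate.

    # Compound the 90%-per-3-years earthquake estimate over a 10-year defueling job.
    # Assumes independent three-year windows -- a simplification for illustration.
    p_3yr = 0.90                       # P(7.0+ quake within any 3-year window)
    years = 10                         # time needed to fully defuel the site
    p_none = (1 - p_3yr) ** (years / 3)
    print(f"P(no 7.0+ quake in {years} years):          {p_none:.4f}")       # 0.0005
    print(f"P(at least one 7.0+ quake in {years} years): {1 - p_none:.4f}")  # 0.9995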
A Cultural Problem
Let me explain how something like this can happen. For 20 years I ran a consulting business in Japan with a partner, serving some of that country’s largest companies. Here is how our business worked:
1. A large Japanese company would announce a bold technical goal to be reached in a time frame measured in years, say 5-10. This could be building a supercomputer, going to the Moon, whatever.
2. Time passes and in quarterly meetings team leaders are asked how the project is going. They lie, saying all is well, while the truth is that little progress has been made. Though money is spent, sometimes no work is done at all.
3. The project deadline eventually approaches and a junior team member is selected to take the heat, admitting in a meeting that there has been very little progress, taking responsibility and offering to resign. The goal will not be reached, the company will be embarrassed.
4. In a final attempt to avoid corporate embarrassment, the company reaches out to me: surely Bob knows some Silicon Valley garage startup that can build our supercomputer or take us to the Moon. Money is no object.
5. Sure enough, there often is such a startup and the day is saved.
Fix Now, or Pay Later
It’s my belief that this is exactly what’s going on right now at Fukushima Daiichi. The very logic of time and probability that scares the bejesus out of me is being completely ignored, replaced with magical thinking. Organizations are committing to fix the current disaster and avoid the next disaster when in fact they are probably incapable of doing either. Lies are being told because Japanese government and industry are more afraid of their vulnerabilities being exposed than they are concerned about citizens dying. Afraid of being embarrassed, they press forward doing the best that they can, praying that an earthquake doesn’t happen.
This is no way to approach a nuclear catastrophe. What’s even worse is this approach isn’t unique to Japan but is common in the global nuclear industry.
Time is critical. What’s clearly required in Fukushima is new project leadership and new technical skills. Some think the Japanese military should take over the job, but I believe that would be just another mistake. The same foot dragging takes place in the Japanese military that happens in Japanese industry.
Fukushima Daiichi requires a Manhattan Project approach. The sole role of the Japanese government should be to pay for the job. A single project leader or czar should be selected from outside the nuclear industry, and that leader should probably not be Japanese. Contracts should be let to organizations from any country strictly on merit, so the work goes only to the best people, those who can move the fastest without compromising safety. Then cut the crap and get it done in a third or half the time.
But that’s not how it will happen. In Japan it almost never is.
Reprinted with permission
Photo Credit: Franck Boston/Shutterstock
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
With Facebook now public and sitting on a huge pile of cash, let’s turn the conversation to the social network’s most pressing competitor, Google. Google and Google+ don’t appear to present much of a threat to Facebook, but the game board was reset on Friday and tactics at both companies will change accordingly. Now Facebook has to find a way to grow revenue and users and will increasingly bump up against Google’s huge advantages in search and apps. For Facebook to achieve its goals, the company will have to enter both spaces with gusto.
Google has learned how to leverage its strengths and suddenly one of those strengths is Facebook’s success. Now that Facebook is a $100 billion company, it doesn’t hurt Google to be number two in that space. Who else is? Pinterest? Instagram? Twitter? None of those services offers a full-fledged social network for those who do want a Facebook alternative, and some people will want one.
There’s nothing that unique at Google+ to cause people to leave Facebook for it. But there are compelling reasons why publishers might decide they need to make use of it, chiefly for search rankings. If the publishers think they’ll get better rankings, they’ll help push it along, which means Google+ will continue to grow whether people actually use it or not.
In January, for example, Google added a new box promoting people who are on Google+. If you’re not on Google+, you can’t appear in the box. Do a search for "music" and someone like Britney Spears would show up. The following week Lady Gaga, who had ignored Google+ up until that point and so didn’t appear in the box, joined. Search for music today and there’s Lady Gaga.
That’s a game changer. Google has used the attraction of its search page to convince publishers to effectively jump start its social network. It probably won’t overtake Facebook. Google+ might always remain a distant second to Facebook. But it has given Google a much more viable competitor than ever before. And if you’re a publisher, you want to be part of it, because it has a huge impact on your visibility in Google search.
The guy who really gets this is Danny Sullivan over at Search Engine Land.
Google has other apps it can leverage like Gmail, for example, and Google Docs. And of course there’s basic search, itself. Looking at Gmail as an example, it has been in Facebook’s interest to keep users communicating inside the social network rather than extending relationships outward through a mail client where they’d risk escaping from Facebook entirely, talking among themselves. In its new role as a grownup, however, Facebook will ultimately have to face the external mail needs of members if it intends to continue subscriber growth, so I’d look for some sort of FMail service with clever social media hooks.
Of course the most obvious way for Facebook to take it to Google would be in basic search, but that’s where I’d see Facebook playing a card similar to Google’s and accepting number-two status in search by acquiring Bing from Microsoft. This makes sense for both companies since it would probably happen as a stock deal, with Microsoft increasing its Facebook holdings (and influence). For Facebook, buying Bing this week would cost a lot less than it would have last week, since the social network can pay with bloated stock.
There’s no do-or-die in this, but Facebook and Google are lining up as each other’s main enemies, and in order to compete each will start to look a lot more like the other.
Reprinted with permission
Photo Credit: George Lamson/Shutterstock
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
Six in a series. So after five parts, one question remains: What will IBM look like by the end of 2015? It will look like Oracle.
With earnings per share meaning everything and a headcount mandate that can’t be achieved without totally transforming the company, IBM is turning itself into something very different. Gerstner’s service business that saved the company 20 years ago will be jettisoned, probably to a combination of US and international buyers.
Look for the Global Services business to be sold to one or more Indian companies while the current federal business will be sold to one of IBM’s US competitors.
Meanwhile, IBM will move its business toward hardware and applications delivered by partners who carry the Service Level Agreement penalty risk.
Before we move on let’s examine that SLA penalty issue because I think there’s an aspect of this that’s misunderstood in the marketplace. A decade ago, in one of those Aha! moments that transform corporations, IBM figured out that it was better to ask forgiveness from its customers than to ask permission. Specifically, IBM modeled two competing scenarios: spend what it takes to actually meet its service commitments, or cut delivery to the bone and simply pay the contractual penalties whenever it missed.
IBM decided it could make more money, a lot more money, by paying penalties than by actually doing what it was being paid to do.
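To see how that arithmetic can come out the way it did, here is a sketch with invented numbers. Every figure below is hypothetical, chosen only to show the shape of the comparison; none of it comes from IBM or from any document I’ve seen.

    # Hypothetical SLA economics -- all figures invented purely for illustration.
    contract_revenue   = 100.0   # annual contract value ($M)
    cost_full_delivery = 80.0    # cost of staffing to actually meet the SLA ($M)
    cost_skeleton_crew = 45.0    # cost of a stripped-down delivery team ($M)
    expected_penalties = 15.0    # SLA penalties expected for missed targets ($M)

    profit_doing_the_work = contract_revenue - cost_full_delivery                       # $20M
    profit_paying_penalty = contract_revenue - cost_skeleton_crew - expected_penalties  # $40M

    print(f"Profit doing the work:     ${profit_doing_the_work:.0f}M")
    print(f"Profit paying the penalty: ${profit_paying_penalty:.0f}M")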
One individual was rewarded for this stroke of genius, by the way, sanctifying what could be one heck of a class-action lawsuit.
Just like Ford deciding it was cheaper for a few customers to die than to improve Pinto fuel tank safety, IBM decided to deliberately cheat its customers. The result is today’s IBM, rotten to the core.
Good riddance.
Meanwhile, IBM has spent lots of money on software product applications and on self-managing hardware. They want to own (not manage) infrastructure that is now hardware and software, not bodies.
Services profit margins are terrible compared with those of combined software and hardware. This two-sided business model has both customers and partners paying. So in Big Data and Enterprise analytics IBM hopes to own analysis and value-added reporting.
It doesn’t even require squinting to see this as emulating Oracle. Both companies will have big hardware, big data, big applications, but not big numbers of people required by the services model. It’s a transformation of the business that IBM will have no trouble spinning as positive for everyone. Everyone, that is, except the thousands of workers about to be let go.
I wonder how they’ll spin that?
Also in this series: "The downfall of IBM"; "Why is IBM sneaking around?"; "It's a race to the bottom, and IBM is winning"; "How do we just fix IBM?"; "IBM is at a tipping point".
Reprinted with permission.
Photo Credits: Alfred Lui (top); wendelit teodoro/360Fashion (Cringely -- below)
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
Fifth in a series. When I was growing up in Ohio, ours was the only house in the neighborhood with a laboratory. In it the previous owner, Leonard Skeggs, had invented the automated blood analyzer, pretty much creating the present biomedical industry. Unwilling to let such a facility go to waste, I threw myself into research. It was 1961 and I was eight years old.
I was always drawn to user interface design and quickly settled, as Gene Roddenberry did in Star Trek half a decade later, on the idea of controlling computers with voice. Using all the cool crap my father (a natural scrounger) dragged home from who knows where, I decided to base my voice control work on the amplitude modulation optical sound track technology from 16mm film (we had a projector). If I could paint optical tracks to represent commands then all I’d need was some way of analyzing and characterizing those tracks to tell the computer what to do. But the one thing I didn’t have down in the lab in 1961 was a computer.
That’s what took me to IBM.
Suits and Punched Cards
I wrote a letter to IBM CEO T.J. Watson, Jr., pecking it out on an old Underwood manual typewriter. My proposal was simple -- a 50/50 partnership between IBM and me to develop and exploit advanced user interface technologies. In a few days I received a letter from IBM. I don’t know if it was from Watson, himself, because neither my parents nor I thought to keep it. The letter invited me to a local IBM research facility to discuss my plan.
I wore a suit, of course, on that fateful day. My Dad drove me, dropping me at the curb and telling me he’d be back in a couple of hours. It was a different era, remember. The car, a 1959 Chrysler, was blue with cigarette smoke.
Inside the IBM building I met with six engineers all dressed in dark suits with the skinny ties of that era, the tops of their socks showing when they sat down.
They took me very seriously. The meeting, after all, had been called by T.J. Watson, himself.
Nobody said, "Wait a minute, you’re eight".
I made my pitch, which they absorbed in silence. Then they introduced me to their interface of choice, the punched card.
Uh-oh.
Stunning Silence
Thirty years later, long after he retired, I got to know Homer Sarasohn, IBM’s chief engineer at that time. When I told him the story of my experience with IBM he almost fell off his chair laughing. My ideas were good, Homer said, they were just 40 years too early. In other words they were still 10 years in the future when Homer and I were talking a decade ago.
The message that came across clearly from those IBMers back in 1961, by the way, was that they were a little embarrassed by their own lack of progress. Terminals weren’t even common at this point but they were coming. If I could have offered them a more practical magic bullet, I think they might have grabbed it.
So when I write these columns about IBM and you wonder why and where I am coming from, it’s from that boyhood experience of a huge company that took me seriously for a morning, possibly changing my life in the process.
Alas, that IBM no longer exists.
My recent IBM columns have stirred up a lot of interest everywhere except in the press. One reporter called from Dubuque, Iowa, but that’s all. This is distressing because the story I’m telling has not been contradicted by anyone. Nobody, inside or outside IBM, has told me I have it wrong. In fact they tend to tell me things are even worse than I have portrayed.
As a reporter I know there are always more stories than I have time or space to write, but this silence from the U.S. business press is deafening. Here is a huge news story that is being completely ignored. It’s not that it has gone unseen. I know who subscribes to the RSS feed for this column and it includes every major news organization in America and most of them in the world.
It’s one thing to be unheard and another to be ignored. This strikes me as an editorial decision based on not pissing off an advertiser, which should make us all sad.
Editor: We didn't ignore you, Robert, which is why we asked your permission to repost this series in its entirety. You're right, this story isn't getting the deserved attention. The other stories in this series: "The downfall of IBM"; "Why is IBM sneaking around?"; "It's a race to the bottom, and IBM is winning"; "How do we just fix IBM?"
The Big Sell-off
But back to IBM. A curious paradox has emerged in this story. There have apparently been at least two versions of the so-called Pike’s Peak presentation laying out IBM’s personnel plans for meeting its 2015 financial goals. Both versions date from last summer, with one saying US head count will be cut 78 percent by the end of 2015 and the other saying head count for the USA and Canada will be cut by 85 percent in that time.
If both presentations are legit, what does this mean for IBM Canada?
It suggests to me that IBM intends to withdraw from Canada entirely, possibly serving Canadian customers remotely from the USA or maybe direct from India. I haven’t been told this. All I am doing is a calculation on the back of an envelope, sketched below, but that’s the way the numbers look to me.
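Here is that envelope, roughly. The headcount figures below are placeholders I made up for the arithmetic, not IBM numbers; the point is only that a 78 percent US cut and an 85 percent US-plus-Canada cut are hard to reconcile unless Canada goes to essentially zero.

    # Back-of-envelope reconciliation of the two presentation versions.
    # Headcounts are made-up placeholders, not IBM figures.
    us_headcount = 90_000
    ca_headcount = 10_000

    us_cut_pct       = 0.78   # version one: US head count cut 78 percent
    combined_cut_pct = 0.85   # version two: US plus Canada cut 85 percent

    combined_cut = combined_cut_pct * (us_headcount + ca_headcount)
    us_cut       = us_cut_pct * us_headcount
    ca_cut_pct   = (combined_cut - us_cut) / ca_headcount

    print(f"Implied Canadian cut: {ca_cut_pct:.0%}")   # 148% with these placeholders
    # Anything over 100 percent is impossible; the two numbers only come close to
    # reconciling if IBM Canada is cut to essentially nothing.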
Maybe the Globe and Mail should pay attention.
IBM is at a tipping point. This week I’ve been told job offers from IBM are half of what they were last week as new policies start to take effect. IBMer after IBMer has reported to me draconian cuts that will make it very difficult (maybe impossible) for Big Blue to fulfill its contractual obligations in the event of a regional (multi-customer) crisis like an earthquake or hurricane.
Contradicting my first column in this series, even IBM workers on federal contracts including defense and national security accounts are being affected.
What the heck is going on here?
I have a theory, of course.
I think huge parts of IBM (especially Global Services) are being readied for sale. Fixating solely on the bottom line, IBM appears to be cutting every expense it can in order to goose earnings and make those divisions being put on the block look more valuable.
Are buyers really that stupid?
They probably are. All it takes is an auction environment with one snowed bidder to force the eventual buyer to knowingly pay more than the assets are worth.
It’s a clever tactic but if a disaster happens before such a sale can close, it will have been an experiment with terrible consequences for IBM customers.
Reprinted with permission.
Photo Credits: nasirkhan/Shutterstock (top); wendelit teodoro/360Fashion (Cringely -- below)
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
Fourth in a series. Well it can’t be done from the inside, so it has to be done from the outside. And the only outside power scary enough to get through the self-satisfied skulls of IBM top management is IBM customers. A huge threat to revenue is the only way to move IBM in the proper direction. But a big enough threat will not only get a swift and positive reaction from Big Blue, it will make things ultimately much better for customers, too.
So here is exactly what to do, down to the letter. Print this out, if necessary, give it to your CEO or CIO and have them hand it personally to your IBM account rep. Give the IBM rep one business day to complete the work. They will fail. Then go ballistic, open up a can of whoop-ass, and point out that these requirements are all covered by your Service Level Agreement. Cancel the contract if you feel inclined.
If enough CIOs ask for it, this action will send immediate shock waves throughout IBM. Once Big Blue’s customers find out how long it takes to get this information and they see what they get, then things will get really interesting.
But don’t limit this test just to IBM. Give it to any IT service vendor. See how yours stacks up. Ask your IT outsourcing provider to produce the following:
1. A list of all your servers under their support. That list should include:
Is this list complete? How long did it take your provider to produce the list? Did they have all this information readily accessible and in one place?
2. A report on the backups for your servers for the last two weeks.
Is this list complete? How long did it take your provider to produce the report? How often does your provider conduct a data recovery test? If a file is accidentally deleted, how long does it take your provider to recover it? Can your provider perform a "bare metal" restoration? (Bare metal is the recovery of everything, operating system included, onto a blank system.)
3. A report on the antivirus software on your Windows servers.
Is this list complete? How long did it take your provider to produce the report? When a virus is detected on a server, how is the alert communicated to your IT provider? How fast do they log the event and act on it?
4. A report on your network. It should include:
Is this information complete and current? How long did it take your provider to produce this information? Is this information stored in a readily accessible place so that anyone from your IT provider can use it to diagnose problems?
5. Information on your Disaster Recovery plans. Here is what you want to know:
6. Help desk information. Here is what you want to know:
How long did it take your provider to produce this report? Did they have all the help desk ticket information readily accessible to everyone and in one place?
7. Look for evidence of continuous improvement.
A good IT provider will have the tools to automatically collect this data and will have reports like these readily available. It should be very easy and quick for a good IT provider to produce this information.
A key thing to observe is how much time and effort it takes your IT provider to produce this information. If they can’t produce it quickly, then they don’t have it. If they don’t have it they can’t be using it to support you.
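For a sense of what "readily accessible and in one place" looks like in practice, here is a minimal sketch. The inventory.csv file, its column names, and the seven-day threshold are all invented for the example; the point is that a provider with real tooling can answer this kind of question in seconds.

    # Flag stale backups in a (hypothetical) single-place server inventory export.
    # File name, columns (hostname, os, last_backup), and the 7-day threshold
    # are invented for illustration.
    import csv
    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=7)
    now = datetime.now()

    with open("inventory.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    stale = [r for r in rows
             if now - datetime.fromisoformat(r["last_backup"]) > STALE_AFTER]

    print(f"{len(rows)} servers inventoried, {len(stale)} with backups older than 7 days")
    for r in stale:
        print(f"  {r['hostname']}: last backup {r['last_backup']}")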
This then will lead you to the most important question: are they doing the work you are paying them for?
Reprinted with permission.
Also in this series: "The downfall of IBM"; "Why is IBM sneaking around?"; "It's a race to the bottom, and IBM is winning".
Photo Credits: Vasyl Helevachuk/Shutterstock (top); wendelit teodoro/360Fashion (Cringely -- below)
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".
Third in a series. The current irrationality at IBM described in my two previous columns, here and here, is not new. Big Blue has been in crazy raptures before. One was the development of the System 360 in the 1960s when T.J. Watson Jr. bet the company and won big, though it took two tries and almost killed the outfit along the way. So there’s a legacy of heroic miracles at IBM, though it has been a long while since one really paid off.
There are those who would strongly disagree with this last statement. They’d say that with its strong financial performance IBM is right now in one of its greater moments. But haven’t we just spent 2000 words showing that’s not true? Successful companies aren’t heartsick and IBM today is exactly that, so the company is not a success.
Looking back over the 35 years I’ve been covering this story I can see in IBM an emotional and financial sine wave as rapture leads to depression then to rapture again, much of it based on wishful thinking. The first IBM rapture I experienced was pre-PC under CEO John Opel, when someone in finance came up with the idea of selling to IBM’s mainframe customers the computers they’d been leasing. Sales and profits exploded and the amazing thing was the company began writing financial plans based not only on the idea that this conversion largess would continue essentially forever but that it would actually increase over time, though obviously there were only so many leases to be sold.
When the conversions inevitably ended, IBM execs were shocked, but Opel was gone by then, which may have set another important precedent of IBM CEOs getting out of Dodge before their particular shit hits the fan. We see that most recently in Sam Palmisano, safely out to pasture with $127 million for his trouble, though at the cost of a shattered IBM.
Thanks for nothing, Sam.
Opel was followed by John Akers, who enjoyed for a time the success of the IBM PC, though Bill Lowe told me that IBM never did make a profit on PCs. No wonder they aren’t in that business today. Akers’ departure was particularly gruesome but it led to IBM looking outside for a leader for the first time, hiring Lou Gerstner, formerly of American Express.
Gerstner created the current IBM miracle of offering high-margin IT services to big customers. It was a gimmick, an expedient to save IBM from a dismal low point, but of course it was soon integrated into IBM processes and then into religion and here we are today with an IBM that’s half IT company, half cargo cult, unable to get beyond Gerstner’s stopgap solution.
Ironically, in Palmisano’s effort to continue Gerstner’s legacy, he destroyed almost every one of his predecessor’s real accomplishments.
Living in Denial
Where will future IBM growth come from? Wherever it comes from, can IBM execute on its plan to grow new businesses using cheap, underskilled offshore talent? If Global Services is struggling to hang on, how well will this work for the new IBM growth businesses coming up? As IBM infuriates more and more of its customers, how long can IBM expect to keep selling big ticket products and services to those very same customers?
Global Services is a mature business that has been around for about 20 years. In IBM’s 2015 business plan big income is expected from newer businesses like Business Analytics, Cloud and Smarter Computing and Smarter Planet. Can these businesses be grown in three to five years to the multi-billion dollar level of gross profit coming from Global Services? Most of these businesses are tiny. A few of them are not even well conceived as businesses. It takes special skills and commitment to grow a business from nothing to the $1 billion range. Does IBM have what it takes?
Probably not.
Do you remember eBusiness? Do you remember On-Demand? These are recent examples of businesses IBM planned to grow to billions in sales, businesses that no longer exist today. Some claim that Blue Gene is shortly to be shuttered, too.
Here’s a simple thought experiment. When it comes to these new software and Internet services, IBM’s competition comes from a variety of companies including Amazon, Apple, Dell, Google, Hewlett Packard, Oracle and others. Does IBM have an inherent advantage at this point against any of those companies? No. Is IBM in any way superior to all of them and thence in a position to claim dominance? No.
IBM isn’t smarter, richer, faster moving, or better connected. They may be willing to promise more, but if they can’t also deliver on those promises, any advantage will disappear.
IBM is still buying profitable businesses, of course, imposing on them IBM processes, cutting costs and squeezing profits until customers inevitably disappear and it is time to buy another company. It’s a survival technique but hardly a recipe for greatness.
My opinion is that IBM’s services business profit will continue to decline as they try to cost-cut their way to prosperity. Unless they find a way to grow revenue and provide a quality product (service), they’re either headed for a sell-off of the entire service business, probably to some Indian partner, or for a complete implosion. In short, it’s a race to the bottom, and IBM is winning.
Killing the cash cow
Yes but, readers tell me, that’s just services, not the real IBM.
There is no real IBM, not any longer.
The company has become a cash cow. You never feed a cash cow, just take money out until the cow is dead.
Hardly respect for the individual, eh?
If IBM is planning a 78-percent staff reduction, then that will of necessity involve all USA operations, not just Global Services. Hardware, systems, software, storage, consulting, etc. will all see serious staff cuts. This means IBM could be moving a lot of its manufacturing and product support offshore. Raleigh, Lexington, Rochester, and several other IBM communities are about to lose a lot of jobs.
Every non-executive job at IBM is viewed as a commodity that can be farmed out to anyone, anywhere.
IBM was once so special but today there’s little difference between IBM, AOL, or Yahoo except that IBM has better PR. All three are profitable, something we tend to forget when it comes to AOL and Yahoo. All three are effectively adrift. All three are steadily selling off the bits of themselves that no longer seem to work. When Global Services is gone, what will IBM sell next?
Everything else.
Reprinted with permission.
Also in this series: "The Downfall of IBM"; "Why is IBM sneaking around?"; "How do we just fix IBM?"
Photo Credits: Lisa F. Young/Shutterstock (top); wendelit teodoro/360Fashion (Cringely -- below)
Robert X. Cringely has worked in and around the PC business for more than 30 years. His work has appeared in The New York Times, Newsweek, Forbes, Upside, Success, Worth, and many other magazines and newspapers. Most recently, Cringely was the host and writer of the Maryland Public Television documentary "The Transformation Age: Surviving a Technology Revolution with Robert X. Cringely".