There has been a slew of articles on both companies in recent weeks, as both have announced significant layoffs. Interestingly, those layoffs underline divergent problems, although The New York Times, in one of those articles, referred to both as last-generation technology companies. Ouch.
Although both Intel and Cisco face declining hardware sales, Intel appears to be in a better position than Cisco to weather the downturn – thanks to its strengths in manufacturing, specifically the chip-foundry end of the business.
Intel has not just been taking in other people’s laundry (fabricating ARM-based chips) to keep its IC plants chugging along; the company has also made two very strategic purchases that have positioned it to keep current with the changing times.
Although Cisco has been on the acquisition trail as well, none of its purchases appear to have yielded equally satisfying moves. Cisco’s moves have been mostly defensive: following the layoffs announcement with plans to grow its security business and, hopefully, carve out a significant piece of the promised land of the Internet of Things (IoT).
But will that be enough to offset the plummeting margins in its big core business, now that demand for its high-end hardware has been undermined by competing software products, Chinese rivals, and the recent revelations about American spying programs? My personal opinion: probably not.
Intel, on the other hand, has made some brilliant moves. It has effectively abandoned its Intel Architecture (IA)-centric view of the world and come to see itself for what it is: the world leader in IC manufacturing technologies.
But really, it’s even better than that. Intel can now choose among three different roles: architect (using its own IA or its newly acquired IP), architect/builder (again using its own IA or its newly acquired IP), or simply builder.
About a year ago it bought the FPGA company Altera, and just recently it acquired another company, Nervana, for its machine-learning technology.
By the by, FPGAs (Field Programmable Gate Arrays) are in essence customizable silicon: chips containing processors, memory, I/O, and, most importantly, reprogrammable logic blocks (AND, XOR, etc.).
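To make the “reprogrammable logic block” idea concrete, here is a minimal software sketch (hypothetical, for illustration only). Real FPGA logic blocks are small lookup tables (LUTs) whose truth table is loaded at configuration time, so the same physical block can become an AND gate, an XOR gate, or any other small boolean function:

```python
def make_lut(truth_table):
    """Model a 2-input FPGA logic block: a 4-entry lookup table.

    The truth table is the 'configuration bitstream' for this block;
    loading a different table 'reprograms' the same hardware shape.
    """
    def block(a, b):
        # Index into the table using the two input bits
        return truth_table[(a << 1) | b]
    return block

# The same block shape, configured as two different gates
AND = make_lut([0, 0, 0, 1])
XOR = make_lut([0, 1, 1, 0])

print(AND(1, 1))  # 1
print(XOR(1, 0))  # 1
```

On a real FPGA the equivalent of `make_lut` happens in hardware at configuration time, and thousands of such blocks are wired together into whole circuits.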
Intel didn’t throw the baby out with the proverbial bath water by replacing Altera’s default ARM processors with its own IA. Keeping them was a smart move for a couple of reasons: 1) switching to IA would have been expensive and would have demonstrated that Intel couldn’t think beyond its own architecture, and 2) sticking with the ARM cores kept the existing Altera customer base happy.
Intel acquired Nervana not specifically for its machine-learning ASIC (Application-Specific Integrated Circuit) but more for its machine-learning algorithms and tools.
An interesting note: both FPGAs and ASICs are custom silicon that lends itself – via the logic blocks – to being an engine optimized for running machine-learning algorithms. The way I understand it, machine learning at a high level is about things like pattern recognition: dynamically modeling complex statistical relationships among phenomena. More importantly, machine learning is a precursor to artificial intelligence (AI).
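A minimal sketch of what “modeling statistical relationships” can mean in practice (illustrative only – real workloads are the huge matrix computations that FPGAs and ASICs accelerate): a perceptron, one of the oldest machine-learning algorithms, learns the weights of a decision rule from labeled examples rather than being explicitly programmed. Here it learns to recognize the logical-AND pattern:

```python
# Training data: inputs paired with the pattern we want recognized (AND)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # learned weights
bias = 0.0
rate = 0.1       # learning rate

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out               # nudge weights toward the target
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

def predict(x1, x2):
    """Apply the learned decision rule."""
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
```

Nothing in the code spells out AND; the rule is recovered from the examples. The inner loop is just multiply-accumulate arithmetic, which is exactly the kind of operation that maps naturally onto custom silicon.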
And everyone these days is building custom silicon – Google, IBM, Facebook, Amazon, and now Intel. Why? Because application-specific processing engines get the job done quicker, smarter, and, in the long run, cheaper.
So Intel is now a significant player in this precursor to AI. Intel not only owns the means of production – soon to be a 10nm process – by way of its fabs, but also owns the intellectual property (IP) by way of its acquisitions. That means Intel can now build beyond the high-powered generic cloud servers it produces today (using its own IA) to new hardware outfitted with customizable architecture, including ARM cores driving specific machine-learning algorithms.
Last note: In case you’ve forgotten – or maybe I am writing this to remind myself – anything you can do in software you can do in hardware, only far faster. That’s the beauty of the logic blocks that form the inherent functionality of FPGAs and ASICs: they are ideal building blocks upon which to construct algorithms. And the future of computing – AI – for the time being anyway, is predicated on a new use of this moderately old technology (FPGAs were first introduced in the mid-’80s).