December 26, 2017
With software evolution comes hardware revision. As advanced AI demands new levels of computational capability, the very hardware that fuels it is being redesigned so that emerging industries and technologies can fast-track development and adoption.
When you talk to investors in Silicon Valley, you’ll still find some skepticism. Why, for example, would companies buy faster chips for training when older cards in an Amazon server may be just as good for the job? And yet there is still an enormous amount of money flowing into this area. And it’s coming from the same firms that bet big on Uber (though there’s quite a bit of turbulence there) and WhatsApp.
Nvidia is still a clear leader in this area and will look to continue its dominance as devices like autonomous cars become more and more relevant. But as we go into 2018, we’ll most likely start to get a better sense as to whether these startups actually have an opportunity to unseat Nvidia. There’s the tantalizing opportunity of creating faster, lower-power chips that can go into internet-of-things thingies and truly fulfill the promise of those devices with more efficient inference. And making the servers that train models, like the ones that teach your car what a squirrel looks like, faster and more power-efficient may also turn out to be something truly massive.
Programming languages for classical computers are designed in a way that doesn’t require developers to know how a central processing unit works. The push now is to create high-level quantum programming languages that also shield developers from the complexities of quantum hardware.
The quirks of quantum computing create limitations that don’t exist in classical programming languages. One example: quantum programs can’t have loops in them that repeat a sequence of instructions; they have to run straight through to completion.
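To make that limitation concrete, here is a minimal sketch, not tied to any real quantum SDK, of what a "straight-line" quantum program looks like: the circuit is just a fixed sequence of gates applied once from top to bottom, with no loops or branches inside it. The gate names and the `apply` helper are assumptions for illustration only.

```python
import math

# Two standard single-qubit gates as 2x2 matrices (plain lists of lists).
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard
X = [[0.0, 1.0], [1.0, 0.0]]                  # Pauli-X (quantum NOT)

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit state vector [amp0, amp1]."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# The quantum "program" is a flat, loop-free gate list, executed straight
# through to completion. (The Python for-loop below is just the classical
# simulator walking that fixed list; the circuit itself contains no loops.)
program = [H, X, H]              # H-X-H is equivalent to a Z gate
state = [1.0, 0.0]               # start in the |0> state
for gate in program:
    state = apply(gate, state)

probabilities = [abs(a) ** 2 for a in state]
print(probabilities)             # measurement probabilities for |0> and |1>
```

Since H·X·H equals the Z gate, which leaves |0> unchanged, the measurement probabilities come out as essentially [1.0, 0.0]: the program runs once, straight through, and is done.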