    As Nvidia expands in artificial intelligence, Intel defends turf

    Aug 20, 2018


    Nvidia Corp dominates chips for training computers to think like humans, but it faces an entrenched competitor in a major avenue for expansion in the artificial intelligence chip market: Intel Corp.

    Nvidia chips dominate the AI training chip market, where huge amounts of data help algorithms "learn" a task such as how to recognise a human voice. But one of the biggest growth areas in the field will be deploying computers that carry out the "learned" tasks, and Intel dominates the data centres where such tasks are likely to be run.

    "For the next 18 to 24 months, it’s very hard to envision anyone challenging Nvidia on training," said Jon Bathgate, analyst and tech sector co-lead at Janus Henderson Investors.

    But Intel processors are already widely used for taking a trained artificial intelligence algorithm and putting it to use, for example by scanning incoming audio and translating it into text-based requests, a process called "inference."

    Intel’s chips can still work just fine there, especially when paired with huge amounts of memory, said Bruno Fernandez-Ruiz, chief technology officer of Nexar Inc, an Israeli startup using smartphone cameras to try to prevent car collisions.

    That market could be bigger than the training market, said Abhinav Davuluri, an analyst at Morningstar, who sees an inference market of $11.8 billion by 2021, versus $8.2 billion for training. Intel estimates that the current market for AI chips is about $2.5 billion, evenly split between inference and training.

    Nvidia, which posted an 89 percent rise in profit Thursday, hasn’t given a specific estimate for the inference chip market, but CEO Jensen Huang said on an earnings call with analysts on Thursday that he believes it "is going to be a very large market for us."

    Nvidia sales of inference chips are rising. In May, the company said it had doubled its shipments of them year-over-year to big data centre customers, though it didn’t give a baseline. Earlier this month, Alphabet Inc’s Google Cloud unit said it had adopted Nvidia’s inference chips and would rent them out to customers.

    But Nvidia faces a headwind selling inference chips because the data centre market is blanketed with the CPUs Intel has been selling for 20 years. Intel is working to persuade customers that for both technical and business reasons, they should stick with what they have.

    Take Taboola, a New York-based company that helps web publishers recommend related content to readers and that Intel has touted as an example of how its chips remain competitive.

    The company uses Nvidia’s training chips to teach its algorithm to learn what to recommend and considered Nvidia’s inference chips to make the recommendations. Speed matters because users leave slow-loading pages.

    But Taboola ended up sticking with Intel for reasons of speed and cost, said Ariel Pisetzky, the company’s vice president of information technology.

    First, while Nvidia’s chip was far faster, the time spent shuffling data back and forth to the chip negated the gains, Pisetzky said. Second, Intel dispatched engineers to help Taboola tweak its computer code so that the same servers could handle more than twice as many requests.

    "My options were, you already have the servers. They’re in the racks," he said. "Working with Intel, I can now cut back my new server purchases by a factor of two because I can use my existing servers two times better."

    Nvidia has been working to solve those challenges. The company rolled out software this year to make its inference chips far faster, helping overcome the issue of moving data back and forth. And earlier this week it announced a new family of chips based on a technology called Turing that will be 10 times faster still, Huang said on the analyst call Thursday.

    "We are actively working with just about every single Internet service provider in the world to incorporate inference acceleration into their stack," Huang said. He gave the example that "voice recognition is only useful if it responds in a relatively short period of time. And our platform is just really, really excellent for that."

    Source: telecom


    Copyright © 2017, G.T. Internet Information Co., Ltd. All Rights Reserved.