
Intel’s Pat Gelsinger and the AI Centrino Moment

Last week I was at Innovation, Intel’s conference for developers. In the keynote, Intel CEO Pat Gelsinger said that we are in a Centrino-like wave. Centrino was the Intel wireless platform that brought Wi-Fi to the mainstream and drove the pivot from desktop computers to the now-dominant laptop. With AI, Intel has its Neural Processing Unit (NPU), and, coupled with Intel’s new server architectures, this NPU promises to significantly change how we use technology. The implication is that AI will follow the Centrino profile and that it will take 2.5 years for its full impact to be felt.

While I agree that AI is far bigger than Centrino, it is also happening far faster. Generative AI has gone from an interesting concept to a massive market and technology driver in just a few months.

Let’s look at some of the demos from Gelsinger’s talk that will be game changers once they become common.

Hearing

Gelsinger wears hearing aids, as do many in his family. People with hearing disabilities are at a disadvantage in meetings: most other attendees don’t realize the disability is there, and even when they do, there is little they can do about ambient noise. What Gelsinger showed is an AI-driven, multi-channel technology that can isolate sound wherever the user wants the focus.


If the user is in a Zoom meeting, everything but that meeting is muted. If someone walks into the room, the earpiece notifies the user that someone else needs their attention and switches the focus to that new person, while the software on their laptop transcribes the meeting (speech-to-text) so they can come back to it without missing a word. And when the speaker in the Zoom meeting switched to French, the solution switched to real-time translation so that the user continued to understand the content.

What had been a disadvantage became an advantage. With this capability, someone with a hearing problem wearing these AI-driven hearing aids is in a better position than someone with normal hearing, because unaided ears can’t focus on a single speaker, won’t do real-time translation, and provide no transcript; all of that requires this new AI capability.
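To make that behavior concrete, here is a minimal Python sketch of the focus-routing logic. Intel didn’t publish its implementation, so the transcribe and translate helpers below are hypothetical placeholders standing in for real on-device speech-to-text and translation models; only the routing policy is the point.

```python
from dataclasses import dataclass

@dataclass
class AudioSource:
    name: str        # e.g. "zoom-meeting" or "person-in-room"
    language: str    # detected language of the speaker
    priority: int    # the highest-priority source holds the focus

def transcribe(chunk: bytes) -> str:
    """Placeholder for on-device speech-to-text (a real ASR model on the NPU)."""
    return "<transcript>"

def translate(text: str, target: str = "en") -> str:
    """Placeholder for on-device machine translation."""
    return f"<{target} translation of {text}>"

class FocusRouter:
    """Keep exactly one source audible; mute the rest but still transcribe
    them so the user can catch up later without missing a word."""

    def __init__(self) -> None:
        self.sources: list[AudioSource] = []
        self.backlog: dict[str, list[str]] = {}

    def add(self, src: AudioSource) -> None:
        self.sources.append(src)
        self.backlog.setdefault(src.name, [])

    def focused(self) -> AudioSource:
        # The highest-priority source (e.g. the person who just walked in)
        # is the one played to the user's hearing aids.
        return max(self.sources, key=lambda s: s.priority)

    def process(self, src: AudioSource, chunk: bytes) -> str | None:
        text = transcribe(chunk)
        if src.language != "en":             # the real-time translation step
            text = translate(text)
        if src is self.focused():
            return text                      # audible to the user right now
        self.backlog[src.name].append(text)  # muted: saved for catch-up
        return None
```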

Customized Content

Another interesting demo was the creation of new Taylor Swift-style songs that Taylor Swift herself had nothing to do with. This suggests we may need to revise copyright law: what may need protection is an artist’s trade dress, meaning the style, cadence, and recurring elements that differentiate their art from everyone else’s. It also suggests that users may train their AIs to create custom entertainment they will uniquely enjoy and can share on social media. Songs could be built around your own dating experiences rather than the artist’s, making the lyrics personal to the listener and letting people who aren’t as talented as Taylor Swift create unique content with AI as the bridge.


The demo then moved beyond music to the creation of unique pictures. The AI took a pose from a ballerina, combined it with a picture of an astronaut, and then added motion to create unique video content. This opens the door for users to create ever richer content, for their own enjoyment or to share with family and friends over services like YouTube, at a tiny fraction of what that content would cost to produce the traditional way.
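Intel didn’t name the models behind this demo, but the pose-plus-subject step it describes matches a well-known open-source technique: ControlNet-conditioned image generation. Here is a minimal sketch using Hugging Face’s diffusers library; the model IDs are real public checkpoints, while the file paths and prompt are illustrative, not what Intel used.

```python
# Pose-conditioned generation: the pose image supplies the structure
# (the ballerina), the text prompt supplies the subject (the astronaut).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# An OpenPose skeleton extracted from a ballerina photo (path is illustrative).
pose = load_image("ballerina_openpose.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "an astronaut in a spacesuit, studio lighting, high detail",
    image=pose,
    num_inference_steps=20,
).images[0]
image.save("astronaut_in_ballerina_pose.png")
```

Animating the result, as the demo did, would be a further step (e.g., an image-to-video model), but the still-image recombination above is the core of the trick.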

Hybrid AI

Behind these efforts was not only Intel’s new NPU but also its cloud resources, which allow access to extremely large language models and create a blended solution using the power of both PCs and servers. And Intel’s increasing partnership with Arm suggests that this same capability will move to a blend of smartphones and servers as well, increasing user access and the variety of solutions that will appear. We are moving from a time when applications like this run on the client or the cloud to a time when they will run on the client and the cloud.
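As a rough illustration of that client-and-cloud split, here is a hedged Python sketch of the routing decision. Both run_on_npu and run_in_cloud are hypothetical placeholders (the keynote didn’t detail the actual stack); the point is the policy of keeping small jobs local and escalating large ones to the big cloud model.

```python
LOCAL_MAX_PROMPT_TOKENS = 512  # assumed capacity of the small on-device model

def run_on_npu(prompt: str) -> str:
    """Placeholder for a small model served by the laptop's NPU runtime."""
    return f"[local answer to: {prompt}]"

def run_in_cloud(prompt: str) -> str:
    """Placeholder for a large model behind a cloud inference endpoint."""
    return f"[cloud answer to: {prompt}]"

def answer(prompt: str, online: bool) -> str:
    """Hybrid routing: keep work local when the on-device model can handle
    it (or when there is no connectivity); escalate big jobs to the cloud."""
    small_enough = len(prompt.split()) <= LOCAL_MAX_PROMPT_TOKENS
    if small_enough or not online:
        return run_on_npu(prompt)   # fast, private, works offline
    return run_in_cloud(prompt)     # the extremely large model lives here

if __name__ == "__main__":
    print(answer("Summarize this meeting in three bullets.", online=True))
```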

Wrapping Up

Pat Gelsinger’s keynote at Intel Innovation was fascinating to watch, but I think he is wrong that we are on a Centrino-like wave that will take 2.5 years to emerge. Gelsinger himself demonstrated that AI apps are moving into the market today, which suggests that emergence is already occurring and that next year we’ll be up to our armpits in AI-based applications that use resources on smartphones, PCs, and the cloud simultaneously, drawing on whichever are best and most available. I think this will change how we work, extend creative innovation to a far broader audience of users, and open the door to a massive increase in content.

While Gelsinger proposed that it would take 2.5 years, his presentation proved that this wave is already breaking. We are already in the age of AI.

About the author: As President and Principal Analyst of the Enderle Group, Rob Enderle provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.

Related Items:

How to Pick a Generative AI Partner

The Three Approaches to AI Implementation

Should Employees Own the Generative AI Tools that Enhance or Replace Them?

 

