
Linus Torvalds: AI is currently 90% marketing and 10% reality

winjer

Member



Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality. The developer, who won Finland's Millennium Technology Prize for the creation of the Linux kernel, was interviewed during the Open Source Summit held in Vienna, where he had the chance to talk about both the open-source world and the latest technology trends.

The outspoken technologist said that modern generative AI services are an interesting development in machine learning technology, and that they will eventually change the world. At the same time, he expressed his dissatisfaction with the "hype cycle" that is fueling too many AI-related initiatives and contributing to Nvidia's impossibly high market valuation.

Everyone and their dog is currently talking about AI, sticking some AI-based cloud service together, or funding an AI-focused multi-million-dollar startup somewhere in the world. Torvalds hates the hype cycle so much that he doesn't even want to go there. The developer is essentially ignoring everything AI, though things will likely change in a drastic way a few years from now.
In five years, Torvalds said, generative algorithms and machine learning tech will become much more useful and interesting. At that point, the entire world will be able to understand how AI can actually be used and what types of daily workloads it can "accelerate." The Linux creator isn't alone in his distrust of modern AI capabilities, with Baidu's CEO recently stating that 99 percent of today's "AI companies" will soon go the way of the (digital) dodo.

Admittedly, AI as a marketing term has become so overused. It has gotten so bad that some companies lie about having AI features in their products.
In the 80s and 90s, we had Turbo and Lasers everywhere. Today, it's AI.

 

SJRB

Gold Member
He’s right in the sense that most “A.I.” solutions companies provide are just GPT wrappers.

It’s all marketing fluff, but it works because the technologically inept can’t stop talking about pure nonsense like “implementing AI in the customer journey”. Bro, what does that mean, specifically?

The higher up the company food chain you go, the wilder these meetings become.

“We should let AI handle customer support”. Dude, what are you talking about? Which part?

It’s hilarious because the noobs get filtered immediately.
 

od-chan

Gold Member
In the 80s and 90s, we had Turbo and Lasers everywhere. Today, it's AI.

Yes and no. Media/marketing will overhype any new technology, just like they did with the internet. Some of it will be legit good though, and AI is gonna be that.

We just have to figure out what exactly AI will be and how it's gonna do that. In that sense, Torvalds is obviously not wrong, but he's not saying anything profound either. I figure he gets asked this a lot; nothing else you can say, really.
 

winjer

Member
Yes and no. Media/marketing will overhype any new technology, just like they did with the internet. Some of it will be legit good though, and AI is gonna be that.

We just have to figure out what exactly AI will be and how it's gonna do that. In that sense, Torvalds is obviously not wrong, but he's not saying anything profound either. I figure he gets asked this a lot; nothing else you can say, really.

You didn't watch his interview. Otherwise, you would have seen that he has a very positive outlook on AI.
He just doesn't like the marketing and hype nonsense going around.
 

od-chan

Gold Member
You didn't watch his interview. Otherwise, you would have seen that he has a very positive outlook on AI.
He just doesn't like the marketing and hype nonsense going around.

The 60-second "interview"? I did watch that, since I was in fact hoping for something more profound. I'm not faulting the man for giving a very straight answer to such a benign question, as I said already.
 

gothmog

Gold Member
I agree with him. It is really hard to find the signal in the noise right now. At the same time, I use it every day. Why write a quick script to process some data when you can just ask an assistant to do it in one sentence? Saves me a ton of time.
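For example, a one-sentence ask like "sum revenue by region in this CSV and rank the regions" gets you something like the following (the file and column names here are made up for illustration):

import pandas as pd

# Hypothetical input: a CSV with "region" and "revenue" columns
df = pd.read_csv("sales.csv")

# Total revenue per region, largest first
summary = (
    df.groupby("region")["revenue"]
      .sum()
      .sort_values(ascending=False)
)

print(summary)
summary.to_csv("revenue_by_region.csv")

Reviewing ten lines like that takes seconds; writing them from scratch, plus the boilerplate around them, is where the time used to go.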
 

Toots

Gold Member
Torvalds hates the hype cycle so much that he doesn't even want to go there. The developer is essentially ignoring everything AI, though things will likely change in a drastic way a few years from now.
Same.
 

dave_d

Member
I've said this before, but the way they talk about AI reminds me of the internet in the mid-to-late 90s. A lot of hype, but it has a ton of potential. The thing is finding real uses for it. (And if there's an AI bubble burst, I figure it's basically just like the dot-com bubble bursting in 2000. Things only went up from there as people figured out what to do with the net.)
 

rm082e

Member
In case anyone missed it, Goldman Sachs had a report on AI recently. The money people aren't convinced there's significant value here, and it seems like the typical game of musical chairs where they're trying to stay in as long as they can make money off the hype train:

Given the focus and architecture of generative AI technology today... truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years.

AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

Allison Nathan: If your skepticism ultimately proves correct, AI’s fundamental story would fall apart. What would that look like?

Jim Covello: Over-building things the world doesn’t have use for, or is not ready for, typically ends badly. The NASDAQ declined around 70% between the highs of the dot-com boom and the founding of Uber. The bursting of today’s AI bubble may not prove as problematic as the bursting of the dot-com bubble simply because many companies spending money today are better capitalized than the companies spending money back then. But if AI technology ends up having fewer use cases and lower adoption than consensus currently expects, it’s hard to imagine that won’t be problematic for many companies spending on the technology today.

That said, one of the most important lessons I've learned over the past three decades is that bubbles can take a long time to burst. That’s why I recommend remaining invested in AI infrastructure providers. If my skeptical view proves incorrect, these companies will continue to benefit. But even if I’m right, at least they will have generated substantial revenue from the theme that may better position them to adapt and evolve.
 

jason10mm

Gold Member
I think most of the consumer-facing AI is bunk; it seems impressive what Alexa can do, and certainly at some levels it IS impressive, but it's not real AI, and there is a massive team of humans back there fixing her, listening to conversations, and getting her to say stuff that is really just the same 50 requests over and over from everyone.

The artistic stuff is similar; it's just aping what other humans have done, after a mountain of work cataloguing stuff with keywords the AI can blindly interpret. How many MILLIONS of pretty-girl portraits were fed in to generate the AI models?

Call me when we have an AI running all of our traffic lights, with cameras that can correctly SEE traffic flow, track cars to anticipate rush hours, and optimize everything for each individual driver to get to their destination. Then I'll get excited.
 

ResurrectedContrarian

Suffers with mild autism
I've said this before, but the way they talk about AI reminds me of the internet in the mid-to-late 90s.
It's a fair comparison since the arrival of the internet was one of the most significant technological turning points of human history; there is a clear "before/after the internet took over" and everything is different on either side of that line.

AI is another turning point of that magnitude; in both cases it takes a few years for the full impact to arrive, but it will absolutely be looked back on as equal to the dawning of the internet age, or possibly even greater.

I think most of the consumer-facing AI is bunk; it seems impressive what Alexa can do, and certainly at some levels it IS impressive, but it's not real AI, and there is a massive team of humans back there fixing her, listening to conversations, and getting her to say stuff that is really just the same 50 requests over and over from everyone.
Alexa is simply awful and years behind on LLM technology; I don't know why, but Amazon has been a total failure on this front. Don't judge anything of current AI based on Alexa.

I agree. And it is not really AI. It is predictive text. Useful, but not AI.
This again.

Predicting the next word is the training, not the comprehension. This excerpt from Ilya using the analogy of a mystery novel will help:



In short: knowing which word will come next requires total comprehension of a text. These models are incredible when asked to complete highly complex research papers, novels full of character, literature reviews, etc., because they grasp the entirety of the logic of the text, its arguments, sub-topics, and even implications, and that is what they use to know how to continue the text word by word. There is no other way to do it at that level.
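To be concrete about what "the training" actually is: the objective really is just next-token prediction, and everything else is emergent. A minimal PyTorch-style sketch of the loss (model here is a placeholder for any network that maps token IDs to logits, not anyone's actual code):

import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    # tokens: integer IDs of a text, shape (batch, seq_len)
    inputs = tokens[:, :-1]    # every position except the last
    targets = tokens[:, 1:]    # the same text shifted left by one
    logits = model(inputs)     # shape (batch, seq_len - 1, vocab_size)
    # cross-entropy between the predicted distribution and the actual next token
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )

Driving that one number down on hard text is what forces the model to track arguments, characters, and implications; there is no shortcut at that level.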

But furthermore, this is exactly how its reasoning works in practice, and I use it daily. I have a long thread from just this morning with Claude (a better AI than ChatGPT, for what it's worth; better at highly technical discussions and code) where I'm working out an approach to a complex data engineering task. Claude does not simply predict likely sequences, but understands the entire context and all its twists and turns. For example, I made a small mention of how I'll be using the data, and much later in the discussion Claude sees a conflict between certain kinds of entropy in the data as it would be extracted by my script and the kind of signal/noise content that I need for the objective; so it points this out, explains the contradiction, intelligently proposes alternative ways to think out the problem, etc. It's also better than most junior, and lately even mid-level, engineers at writing complex code.
 

K' Dash

Member
You didn't watch his interview. Otherwise, you would have seen that he has a very positive outlook on AI.
He just doesn't like the marketing and hype nonsense going around.

you expect people around here to have enough attention span to watch a 30-second video?

you must be new.

I work in the tech industry, and it's amazing that I have people asking me every day how to implement AI in their businesses just for the sake of saying they use it. When I ask them what issue they think AI will solve, they don't know, lol.

You must identify your areas of improvement, then search for a possible AI solution to implement.
 

winjer

Member
you expect people around here to have enough attention span to watch a 30-second video?

you must be new.

I work in the tech industry, and it's amazing that I have people asking me every day how to implement AI in their businesses just for the sake of saying they use it. When I ask them what issue they think AI will solve, they don't know, lol.

You must identify your areas of improvement, then search for a possible AI solution to implement.

That reminds me of this comic strip.

 

rm082e

Member
I work in the tech industry, and it's amazing that I have people asking me every day how to implement AI in their businesses just for the sake of saying they use it. When I ask them what issue they think AI will solve, they don't know, lol.

You must identify your areas of improvement, then search for a possible AI solution to implement.

I'm a manager at a small tech shop focused on a specific industry. Our owner got the "data analytics" hype bug years ago and decided we had to push in that direction because it was the "next big thing". Some people tried to tell him the facts, but he had gold in his eyes and couldn't hear them. He invested a bunch of money, hired a bunch of very expensive people, opened a separate office, and after 2+ years of work we finally got their algorithm. It was designed to review a data set and spot trends in the data that indicate impending problems.

The mathematical model took so long to build up enough confidence that there was going to be a problem that a human sitting and watching a graph that updated every 5 minutes could identify an inevitable problem more quickly. Once they crunched the numbers on the compute side and factored in profit to set a retail price, customers could literally have paid a rotating shift of low-paid workers to visually watch a graph, and it would have been cheaper. At that point, a couple of engineers (who thought the whole analytics project was BS) wrote a basic "if this, then that" type chunk of code to automatically monitor data values based on parameters provided by the end user, something like the sketch below. They did this in their spare time, ran it against the same data set the analytics model was using, and proved that it was almost zero cost and faster.
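For anyone curious, the whole "if this, then that" monitor concept fits in a few lines; this is a made-up sketch with invented column names and thresholds, not our actual code:

import pandas as pd

# Hypothetical user-provided parameters: which column to watch and its bounds
RULES = [
    {"column": "temperature", "max": 90.0, "alert": "temperature too high"},
    {"column": "pressure", "min": 10.0, "alert": "pressure too low"},
]

def check_latest(readings: pd.DataFrame) -> list[str]:
    """Return an alert message for every rule the newest reading violates."""
    latest = readings.iloc[-1]
    alerts = []
    for rule in RULES:
        value = latest[rule["column"]]
        if "max" in rule and value > rule["max"]:
            alerts.append(f"{rule['alert']}: {value}")
        if "min" in rule and value < rule["min"]:
            alerts.append(f"{rule['alert']}: {value}")
    return alerts

Run that every time a new reading lands and you have the entire "product", minus two years and one office.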

The owner was incredibly embarrassed. He laid off the whole analytics team, closed that office, etc. It was a huge blow to our company.

The best part is that once we put this new multi-variable monitoring into our app, almost no one used it. We show it to customers, explain how it can help them, they nod along, and then they never bother to even play with it. It was a huge lesson for me as a product manager.
 

Trogdor1123

Member
It’s about right. I was using ChatGPT the other day and it was a disaster. I gave it a document with a list and asked it to summarize each item. It just started making stuff up. It changed titles, content, everything. It took forever to get it to work. Once it worked, though, it was decent enough.
 

6502

Member
AI is causing thousands of job cuts at my work. It is being used to automate tasks and give prompts to managers. We lost far more jobs to software "robots" that did exactly the same thing before "AI" was a thing; before AI, we also moved to "i"-productivity software, because "i"-everything was a thing after the iPhone.

It is just the new term to make moron managers feel good about implementing things that have been possible, and ongoing, since the 80s.

There sure is potential in real neural networks, but it is a long way off before we get Skynet.
 

ReBurn

Gold Member
AI as we know it is largely the same machine learning and semantic models we've had for years, just with really fast compute behind them. Parsers and lexers have been a thing for ages. They just didn't get fast until someone decided to run them on GPUs.
 

Trilobit

Member
Due to how clearly ChatGPT understands me in both of my non-English languages compared to Siri etc., I wish that specific functionality would be incorporated into the OS. I want to be able to tell my phone in my language: "Okay, so put up an appointment at the start of next week at this time, that I need to meet [name], and that I have to prepare a vegan meal before that. Please remind me three hours before, and also put up three recommendations for easy-to-cook meals."

I don't need anything more extravagant than that. It would also be nice to be able to change settings: "I accidentally flipped my display image 90 degrees; can you return it to its original orientation? Also, I don't like Spotify autostarting every day; turn that function off."
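Under the hood, that's mostly an intent-extraction problem: the model turns the sentence into structured data and the OS executes it. A hand-wavy sketch; every field and function name here is invented for illustration, not any real OS API:

import json

# What the assistant might be asked to emit for the appointment request above
raw = """
{
  "action": "create_event",
  "title": "Meet [name]",
  "when": "early next week, at the given time",
  "notes": ["prepare a vegan meal beforehand"],
  "reminder_hours_before": 3,
  "follow_ups": ["suggest three easy-to-cook vegan meals"]
}
"""

def handle_intent(intent: dict) -> None:
    # A real implementation would call the platform's calendar/reminder APIs here
    if intent["action"] == "create_event":
        print("Creating event:", intent["title"])
        print("Reminder:", intent["reminder_hours_before"], "hours before")

handle_intent(json.loads(raw))

The part Siri keeps failing at is that first step, and that's exactly the part LLMs are already good at in multiple languages.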
 

ResurrectedContrarian

Suffers with mild autism
AI as we know it is largely the same machine learning and semantic models we've had for years, just with really fast compute behind them. Parsers and lexers have been a thing for ages. They just didn't get fast until someone decided to run them on GPUs.
But this depiction is another misconception.

Parsers and lexers are analogous to what constituted NLP for ages, up until the seminal work on transformers completely displaced all of it in favor of a simple architecture with no such dependencies. There is no more manual semantic work of POS tagging, sentence-structure analysis, dependency trees, etc. in today's LLM stack, as there used to be for language models; instead, a direct sequence of chopped-up segments of words is fed in, and the model learns the lowest-level things like syntax and grammar implicitly at the same time that it learns higher-order concepts, all unsupervised, simply by iterating over massive quantities of text.
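To make "chopped-up segments of words" concrete, here's what the input actually looks like, using the tiktoken library (this is OpenAI's BPE tokenizer; other model families use different vocabularies, but the idea is identical):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # BPE vocabulary used by several OpenAI models

ids = enc.encode("Tokenization chops words into subword segments.")
print(ids)                              # a flat list of integer token IDs
print([enc.decode([i]) for i in ids])   # the segments themselves; common words tend to
                                        # stay whole while rarer ones split into pieces

That flat list of integers is all the model ever sees; everything from grammar up to argument structure has to be learned from it.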
 

Melon Husk

Member
But this depiction is another misconception.

Parsers and lexers are analogous to what constituted NLP for ages, up until the seminal work on transformers completely displaced all of it in favor of a simple architecture with no such dependencies. There is no more manual semantic work of POS tagging, sentence-structure analysis, dependency trees, etc. in today's LLM stack, as there used to be for language models; instead, a direct sequence of chopped-up segments of words is fed in, and the model learns the lowest-level things like syntax and grammar implicitly at the same time that it learns higher-order concepts, all unsupervised, simply by iterating over massive quantities of text.
Large language models *are* NLP. They solved it. Nothing more, nothing less, in my opinion. We should bring back that word. Not AI. CNNs solved image recognition; did it change the world? Not yet. edit: I can't wait to have NLP running locally on every device I own. It will be very useful!
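Local inference is already pretty close, for what it's worth; a minimal sketch using the llama-cpp-python bindings (the model path is a placeholder for whatever quantized GGUF model you've downloaded):

from llama_cpp import Llama

# Placeholder path; any small instruction-tuned GGUF model will do
llm = Llama(model_path="models/some-small-model.gguf", n_ctx=2048)

out = llm("Rewrite this politely: give me the report now.", max_tokens=64)
print(out["choices"][0]["text"])

Small models that run on phone-class hardware already handle exactly the summarize/rephrase/understand-my-language tasks you'd want on-device.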
 

IntentionalPun

Ask me about my wife's perfect butthole
My biggest annoyance is when people try to act like "AIs" are doing things when it's clearly just software coded to do a specific thing. There was a Twitter thread that went viral a couple of weeks ago acting like "chat bots" were doing all these crazy things. It was a total grift, because the person who wrote the apps linked a crypto wallet to one of the "chat bots" (aka him) and got people to send it a bunch of crypto coins. In the end, all it amounted to was an LLM generating a meme, after clearly being told to generate memes.. lol
 

ResurrectedContrarian

Suffers with mild autism
Large language models *are* NLP. They solved it. Nothing more, nothing less, in my opinion. We should bring back that word. Not AI. CNNs solved image recognition; did it change the world? Not yet. edit: I can't wait to have NLP running locally on every device I own. It will be very useful!
This could get too technical if we keep going, but: of course they are part of NLP. I was responding to the bit about "parsers and lexers", though, and that is traditional NLP, which was totally supplanted by the LLM revolution.

Previously, you would indeed train models to segment words, assign parts of speech, learn the dependency order of words and clauses and modifiers, etc., and stack all of these to computationally build a syntactic parse of a natural language. Then you'd use some kind of embedding to project the words into a semantic space for further clustering and so on.

But LLMs brought in a new paradigm of "just let it learn all of language's lowest syntactical and highest abstract reasoning concepts at once, by simply churning through massive amounts of text from scratch." And then it turned out that these generative models beat the state-of-the-art parsers and syntax analyses simply by prompting them; e.g., asking even GPT-2 "given these sentences, describe the parts of speech and grammatical rules" and having it answer directly produced better results than the entire state of the art from prior manual language modeling. Very cool; the takeaway is that generative models end up modeling and understanding the complexities of their domain better than any other kind of analytical or manual modeling.
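The contrast is easy to show side by side. The traditional pipeline below uses spaCy (its small English model, en_core_web_sm, installed separately); the LLM route replaces the whole thing with one prompt:

import spacy

# Traditional NLP: explicit tokenization, POS tagging, dependency parsing
nlp = spacy.load("en_core_web_sm")
doc = nlp("Torvalds thinks the hype cycle is overblown.")

for token in doc:
    # surface form, part of speech, dependency relation, and syntactic head
    print(token.text, token.pos_, token.dep_, token.head.text)

# The LLM equivalent is a single prompt, no pipeline at all:
# "Given this sentence, describe the parts of speech and grammatical relations."

Every stage in that pipeline was once its own research area with its own trained model; the prompt version needs none of them.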

As for images, even CNNs are actually being replaced by transformers in all newer image models, because even images are better understood as a language in the same way. But that's another conversation.
 

Sophist

Member
AI doomers like Elon Musk are the most ridiculous. It's a computer; you unplug the power cord and the bad AI is gone.
 

Clear

CliffyB's Cock Holster
AI is clearly in a market bubble scenario right now, and bubbles inevitably burst. Which isn't to say that AI is not a technology of promise, just that there's a long way to go yet.

Most notable is the brewing conflict with copyright and IP law, which is going to get extremely bloody. And this is a huge deal, because ultimately AI cannot "create", only derive from existing forms using metadata. LLMs are fine because no one owns language, but applying this to anything that's owned and valued is a whole other thing.
 

chakadave

Member
AI is clearly in a market bubble scenario right now, and bubbles inevitably burst. Which isn't to say that AI is not a technology of promise, just that there's a long way to go yet.

Most notable is the brewing conflict with copyright and IP law, which is going to get extremely bloody. And this is a huge deal, because ultimately AI cannot "create", only derive from existing forms using metadata. LLMs are fine because no one owns language, but applying this to anything that's owned and valued is a whole other thing.
This sounds great.

If words can’t be owned, then no data can be.
 

IntentionalPun

Ask me about my wife's perfect butthole
AI is clearly in a market bubble scenario right now, and bubbles inevitably burst. Which isn't to say that AI is not a technology of promise, just that there's a long way to go yet.

Most notable is the brewing conflict with copyright and IP law, which is going to get extremely bloody. And this is a huge deal, because ultimately AI cannot "create", only derive from existing forms using metadata. LLMs are fine because no one owns language, but applying this to anything that's owned and valued is a whole other thing.
LLMs train on all kinds of copyrighted content. That issue isn’t limited to image- or video-based generative AI at all.

But there are substantial public-domain datasets and companies willing to license data; or, hell, a billion people they can pay cheap money to generate content to train on.
 

Clear

CliffyB's Cock Holster
This sounds great.

If words can’t be owned, then no data can be.

The real winners are going to be the lawyers! If you consider the sheer amount of money tied up in intellectual property, it seems to me that the people who currently own/control it, be it music, art, whatever, will be extremely motivated to protect their investments.
 

StueyDuck

Member
Admittedly, AI as a marketing term has become so overused. It has gotten so bad that some companies lie about having AI features in their products.
In the 80s and 90s, we had Turbo and Lasers everywhere. Today, it's AI.


Machine learning has been a thing for a long, long time... "AI" is just the latest buzzword for it.

But there most definitely are uses for it in business, so I disagree that it will take so long. Many companies are already finding great success with machine learning.
 

Lord Panda

The Sea is Always Right
I’ve been using AI a lot for coding and scripting, and it’s amazing what it can do with just a bit of guidance. As long as you know how to explain what you need, give it a nudge in the right direction now and then, and actually understand the results, AI becomes a game-changer for software engineering and automation work. It’s like having Stack Overflow in your back pocket, only faster and more personalised.

That said, it's definitely a tool best used if you already have some knowledge of what you're working on. Without some expertise, it could be frustrating, or even goddamn risky. It's not hype, and it's becoming an essential tool across all software and infrastructure teams.
 