
"Fear and Tension" Led to Altman's Ouster at Open AI

I imagine, like most humans, you are fascinated by AI but also afraid of what it might do, not only to jobs and to creative people, but to humanity and our planet as well.

The ouster of Sam Altman at OpenAI was both shocking and sudden, apparently coming as a complete surprise to most people in the industry, obviously to Altman himself, and no doubt to Microsoft as well, which is a major investor.

Apparently, though, the board is deeply worried about the power of AI to harm humanity and feels safety has to be put ahead of profits.  This is fascinating to watch, but I’m not feeling particularly confident right now about our future.  As it is, the world feels increasingly dark and insane, and AI is already stealing everything we say and do and post online, our art, our imagery; surely many jobs won’t be far behind.

I haven’t much faith in government to regulate AI either.  It isn’t that people won’t want to try, but getting the government to function, even to the point of behaving like decent humans and passing a rational budget, is getting to be extremely difficult.

Anyway, I’m encouraged that at least people who have power in this corporation are worried and did something, although it’s clear that they are now talking about having Altman return in some capacity.

This piece in the NYT is a must read if you’re interested in what’s happened at OpenAI, who’s in charge at the moment, etc.

Over the last year, Sam Altman led OpenAI to the adult table of the technology industry. Thanks to its hugely popular ChatGPT chatbot, the San Francisco start-up was at the center of an artificial intelligence boom, and Mr. Altman, OpenAI’s chief executive, had become one of the most recognizable people in tech.

But that success raised tensions inside the company. Ilya Sutskever, a respected A.I. researcher who co-founded OpenAI with Mr. Altman and nine other people, was increasingly worried that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board of directors, also objected to what he saw as his diminished role inside the company, according to two of the people.

That conflict between fast growth and A.I. safety came into focus on Friday afternoon, when Mr. Altman was pushed out of his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders were saying the split was as significant as when Steve Jobs was forced out of Apple in 1985.

The article is published under the byline of Cade Metz who’s been writing about AI since 2015.

This paragraph does not reassure me:

Fears that A.I. researchers were building a dangerous thing have been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build it.

Why oh why does this feel like atomic weapons V2?

Anyway, the company is now being led by interim chief Mira Murati.  The board made it clear that Altman had done nothing wrong, but that it moved because of “extreme fear” of the dangers posed by AI.  I wonder if they have made a breakthrough or seen something that precipitated this move.

www.nytimes.com/...
