OpenAI co-founder and CEO Sam Altman sat down for an extensive interview with this editor late last week, answering questions about some of his most ambitious personal investments, as well as the future of OpenAI.
There was much to discuss. The now eight-year-old outfit has dominated the national conversation in the two months since the release of ChatGPT, a chatbot that answers questions like a human. OpenAI’s products have not only amazed users; the company is reportedly in talks to oversee the sale of existing shares to new investors at a $29 billion valuation despite its relatively nominal revenue.
Altman declined to talk about the company’s current business dealings and fired a bit of a warning shot when asked a related question during our sit-down.
However, he did reveal something about the company’s plans for the future. For starters, in addition to ChatGPT and the outfit’s popular digital art generator, DALL-E, Altman confirmed that a video model is also coming, though he said he “wouldn’t like to make a confident prediction about when,” adding that “it could be quite soon; it’s a legitimate research project. It could take a while.”
Altman made it clear that OpenAI’s evolving partnership with Microsoft – which invested in OpenAI for the first time in 2019 and earlier today confirmed plans to incorporate AI tools like ChatGPT into all of its products – is not an exclusive pact.
Furthermore, Altman confirmed that OpenAI can build its own software products and services, in addition to licensing its technology to other companies. That’s notable for industry watchers who have wondered whether OpenAI could one day compete head-to-head with Google through its own search engine. (When asked about this scenario, Altman said, “Anytime someone talks about a technology being the end of some other giant company, it’s usually wrong. People forget they get to make a counter-move here, and they’re pretty smart, pretty competent.”)
As for when OpenAI plans to release the fourth version of GPT, the advanced language model on which ChatGPT is based, Altman would say only that the long-awaited product “will come out at some point when we are confident that we can [release] it safely and responsibly.” He also tried to temper expectations regarding GPT-4, saying that “we don’t have an actual AGI,” meaning artificial general intelligence, a technology with its own emergent intelligence, as opposed to OpenAI’s current deep learning models, which solve problems and identify patterns through trial and error.
“I think [AGI] is sort of what is expected of us,” and GPT-4 will “disappoint” people with that expectation, he said.
In the meantime, asked when he expects to see artificial general intelligence, Altman said it’s closer than one might imagine, but also that the shift to “AGI” won’t be as abrupt as some expect. “The closer we get [to AGI], the harder it is for me to answer, because I think it’s going to be much blurrier and much more of a gradual transition than people think,” he said.
Of course, before we wrap things up, we spent time talking about safety, including whether society has enough guardrails for the technology OpenAI has already released into the world. Many critics believe not, including concerned educators which are more and more to block access to ChatGPT due to fears that students will use it to cheat. (Google, in particular, has reportedly been reluctant to release its own AI chatbot, LaMDA, over concerns about its “reputation risk.)
Altman said here that OpenAI has “an internal process where we try to break things and study impacts. We use external auditors. We have external red teamers. We work with other labs and have safety organizations look at stuff.”
At the same time, he suggested, the technology is coming – from OpenAI and elsewhere – and people need to start figuring out how to live with it. “There are societal changes that ChatGPT is going to cause or is causing. A big one going on right now is about its impact on education and academic integrity, all of that.” Still, he argued, “starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just putting out what the whole industry will have in a few years with no time for society to update.”
In fact, educators—and maybe parents, too—should understand that the genie can’t be put back in the bottle. While Altman said that OpenAI and other AI outfits will “experiment” with watermarking technologies and other verification techniques to help assess whether students are trying to pass off AI-generated copy as their own, he also hinted that focusing too much on this particular scenario is futile.
“There may be ways we can help teachers be a bit more likely to detect output of a GPT-like system, but honestly, a determined person will get around them, and I don’t think it’ll be something society can or should rely on long term,” he said.
It wouldn’t be the first time humans have successfully adapted to major shifts, he added. Noting that calculators “changed what we test for in math classes” and that Google made the need to memorize facts far less important, Altman said that deep learning models represent “a more extreme version” of both developments. But, he argued, “the benefits are more extreme as well. We hear from teachers who are understandably very nervous about the impact of this on homework. We also hear a lot from teachers who are like, ‘Wow, this is an unbelievable personal tutor for each kid.’”
For the full conversation about OpenAI and Altman’s evolving views on AI’s commodification, regulation, and why AI is “going in exactly the opposite direction” that many imagined five to seven years ago, it’s worth watching the clip below.
You’ll also hear Altman address best- and worst-case scenarios when it comes to the promise and perils of AI. The short version? “The good case is just so unbelievably good that you sound like a crazy person to talk about it,” he said. “And the bad case — and I think this is important to say — is, like, lights out for all of us.”