Q* (pronounced "Q-star") is an alleged internal OpenAI project aimed at applying artificial intelligence to logical and mathematical reasoning. The reported work involves solving mathematics problems at the level of grade-school students.[1][2][3]

Reactions

In November 2023, Reuters reported rumors that certain OpenAI employees had raised concerns with the company's board, suggesting that Q* might signal the imminent emergence of artificial general intelligence.[1] The New York Times[4] and other outlets[5] later indicated that the board had received no such letter from OpenAI employees.[6]

OpenAI spokesperson Lindsey Held Bolton contested this account in a statement to The Verge, saying, "Mira told employees what the media reports were about but she did not comment on the accuracy of the information." Additionally, a source familiar with the situation told The Verge that the board never received a letter about such a breakthrough and that the company's research progress played no role in Altman's abrupt termination.[5] Reuters later reported that Microsoft President Brad Smith rejected the rumors of a dangerous breakthrough, saying, "There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now."[7]

Reactions from others in the field of AI were similarly dismissive of the claims of artificial general intelligence (AGI). François Chollet, an AI researcher at Google whose work addresses how to achieve greater generality in artificial intelligence,[8] noted: "Every single month from here on there will be rumors of AGI having been achieved internally. Just rumors, never any actual paper, product release, or anything of the sort. The first panic over imminent AGI was circa 2013 about Atari Q-learning by DeepMind. The second one was circa 2016 over Deep RL (partially triggered by AlphaGo)."[9][10] Yann LeCun, Chief AI Scientist at Meta, described the rumors as a "deluge of complete nonsense about Q*."[11]

References

  1. Anna Tong; Jeffrey Dastin; Krystal Hu (November 22, 2023). "Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say". Reuters. Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
  2. PRM800K: 800,000 step-level correctness labels on LLM solutions to MATH problems
  3. Let's Verify Step by Step
  4. Roose, Kevin; Newton, Casey. "What's Next for OpenAI, Binance Is Binanceled and A.I. Is Eating the Internet". The New York Times. Retrieved December 2, 2023.
  5. "A recent OpenAI breakthrough on the path to AGI has caused a stir". The Verge.
  6. Newton, Casey. "The OpenAI saga isn't over just yet". Platformer. Retrieved December 2, 2023.
    A report from Reuters that said “several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity.” I can report that the board never received any such letter about Q*. The board never received the letter that Elon Musk posted, either. Last week a letter purporting to be from OpenAI staffers briefly appeared on GitHub. Like the board’s message in firing Altman, it was notably short on specifics. “Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence,” the letter read. Musk posted a link to the letter on X, along with the comment “These seem like concerns worth investigating.” In any case, I’m told, no such letter was ever received by the OpenAI board.
  7. M, Mujiva; Coulter, Martin. "Microsoft president says no chance of super-intelligent AI soon". Reuters.
  8. "To Really Judge an AI's Smarts, Give it One of These IQ Tests". IEEE Spectrum. February 2, 2021. Retrieved August 2, 2021.
  9. @fchollet (November 23, 2023). "Every single month from here on there will be rumors of AGI having been achieved internally. Just rumors, never any actual paper, product release, or anything of the sort" (Tweet) via Twitter. – François Chollet, the creator of the Keras deep-learning library and AI researcher at Google
  10. @fchollet (November 23, 2023). "The first panic over imminent AGI was circa 2013 about Atari Q-learning by DeepMind. The second one was circa 2016 over Deep RL (partially triggered by AlphaGo). So many folks in late 2016 were convinced that Deep RL would lead to AGI in under 5 years..." (Tweet) via Twitter. – François Chollet, the creator of the Keras deep-learning library and AI researcher at Google
  11. @ylecun (November 24, 2023). "Please ignore the deluge of complete nonsense about Q*" (Tweet) via Twitter. LeCun continued: "One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published ideas and results. It is likely that Q* is OpenAI attempts at planning. They pretty much hired Noam Brown (of Libratus/poker and Cicero/Diplomacy fame) to work on that. [Note: I've been advocating for deep learning architecture capable of planning since 2016]."
