🤖 OpenAI Begins Training New AI Model

Plus: Google Chromebooks Gain AI Features

Hey Humans! 👋 

Get the most important AI, tech, and science news in a free daily email.

We are testing a new format, with headlines linking to the source plus a comprehensive summary of the news.
Your feedback on it is important, so please take the poll below.

Do You Like the New Format?



Okay, have you heard about this hilarious experiment with AI language models? 
Researchers basically asked them to pick a random number between 0 and 100 - you know, something a calculator can handle no problem. But get this - these super smart AI assistants totally bombed it in the most amusingly human way possible!

It turns out the likes of GPT-3, Claude, and Gemini are just as bad at being truly random as the rest of us mere mortals.

GPT-3 is absolutely obsessed with 47, Claude has a weird thing for 42 (you know, the Answer to the Ultimate Question from Hitchhiker's Guide), and Gemini can't get enough of 72. Random number OCD is real!

But it gets better. In attempts at randomness, the models copied classic human number biases. They avoided low digits, high digits, round numbers, and repeating digits - anything that seemed too "patterny."

Instead, they gravitated toward stuff in the middle with an "accidentally on purpose" vibe, like ending in 7. It's like asking a 5-year-old for a "super random, no takebacks" number and they give you 27 every time.

Now of course, these models aren't actually trying to be random at all. They're just computational parrots, spitting back out whatever their training data taught them humans say most often when prompted for a "random" number. Randomness itself is totally beyond their comprehension.
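You can see what this kind of bias looks like with a quick tally. The sketch below is a toy simulation, not a real API call: `mock_model_pick` is a hypothetical stand-in for repeatedly prompting a model with "pick a random number between 0 and 100," and the favorite numbers and bias strength are assumptions chosen to mimic the behavior described above.

```python
import random
from collections import Counter

def mock_model_pick() -> int:
    """Hypothetical stand-in for an LLM call: instead of querying a real
    model, simulate the reported human-like bias toward mid-range numbers,
    especially ones ending in 7."""
    favorites = [42, 47, 72, 37, 57, 67]   # assumed "favorite" values
    if random.random() < 0.6:              # assumed bias strength
        return random.choice(favorites)
    return random.randint(0, 100)          # otherwise, genuinely uniform

def tally(n_trials: int = 10_000) -> Counter:
    """Collect picks and count how often each number appears."""
    return Counter(mock_model_pick() for _ in range(n_trials))

random.seed(0)  # reproducible run
counts = tally()

# A truly uniform sampler would give each of the 101 numbers roughly 1%
# of the picks; the biased sampler piles mass onto a few favorites.
print(counts.most_common(3))
```

Running the same loop against an actual model endpoint and comparing the resulting histogram to a flat 1%-per-number baseline is essentially the experiment the researchers describe.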

So in an ironic twist, the most human-like flaw they replicated? Our laughably terrible grasp of randomness!
Maybe us inferior meatbags have one tiny advantage over the forthcoming AI overlords?

Who knows? All I know is next time an AI asks me for a random number, I'm hitting it with a stunner like 3 or 98. Got to keep these puppets on their toes!

⚡️
Quick Hits

OpenAI is training a new AI model to succeed GPT-4, aiming for "artificial general intelligence" (AGI), capable of human-level tasks. This model will enhance AI products like chatbots and digital assistants. OpenAI has also formed a Safety and Security Committee to address risks such as disinformation and job displacement. Recent developments include the release of GPT-4o, which can generate images and respond conversationally, though it faced controversy over mimicking Scarlett Johansson's voice. Legal challenges include a copyright lawsuit from The New York Times. Co-founder Ilya Sutskever's departure raises concerns about AI safety, with John Schulman now leading safety research efforts.

Jan Leike, a prominent AI researcher who recently resigned from OpenAI, has joined Anthropic to lead a new "superalignment" team. This team will focus on AI safety and security, such as scalable oversight and automated alignment research. Leike will report to Anthropic's chief science officer, Jared Kaplan, and restructure current researchers under his leadership. Anthropic, founded by ex-OpenAI members including CEO Dario Amodei, positions itself as more safety-focused. This move comes after OpenAI dissolved its Superalignment team, which Leike co-led.

Google has unveiled new AI-powered features for its $350 Chromebook Plus laptops, enhancing productivity and creativity. Key details include the integration of the Gemini assistant on the home screen, the “Help Me Write” feature for AI text suggestions, the Magic Editor in Google Photos for advanced image editing, and customizable AI wallpapers and video call backgrounds. This move highlights Google's commitment to bringing advanced AI capabilities to budget-friendly devices, signaling a new era of accessible, AI-infused computing.

Helen Toner, a former OpenAI board member, revealed on The TED AI Show podcast that the board was unaware of ChatGPT's 2022 launch until seeing it on Twitter. She discussed the events leading to CEO Sam Altman's firing in November, citing his lack of transparency about safety processes and his involvement with OpenAI’s startup fund. Bret Taylor, current board chief, refuted Toner's claims, noting an independent review found no safety, security, or financial issues. Toner and Tasha McCauley have called for government regulation of AI, arguing OpenAI cannot self-regulate effectively.



🚀 
Tools

ZeroTrusted AI - Safeguards AI privacy, enabling secure interactions with LLMs. Ensures data integrity and confidentiality through encryption and context-preserving techniques. Launched as SaaS for enhanced digital privacy.

Fotor Video Enhancer - An online tool utilizing AI for effortless video quality enhancement. Supports popular formats, offers targeted adjustments, and has a user-friendly interface.

ThinkAny - An AI search engine employing RAG technology for retrieving and aggregating high-quality content, coupled with intelligent answering features.

Pulse AI - Instant UX analysis for websites and apps, now with image analysis. Offers tailored recommendations, persona tracking, and journey optimization across multiple languages.

Muse Pro - An advanced drawing app for iPhone and iPad integrating real-time AI to augment creativity. Supports Apple Pencil with pressure sensitivity, intuitive controls, and fine-tuned AI collaboration.

Feedback

What'd you think of today's edition?


If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.

Join the conversation
