The Weekly Authority: 📱 Pixel 8 leaks aplenty

⚡ Welcome to The Weekly Authority, the Android Authority newsletter that breaks down the top Android and tech news from the week. The 236th edition is here, with a first look at the Pixel 8 series, unsafe Exynos chips, NASA’s new Artemis moon suit, Oscars news, and the possibility of AI takeover.

🎮 After letting my PS Plus subscription lapse, I’ve been lured back to Premium by the opportunity to replay Heavy Rain, plus Ghostwire: Tokyo and the PS5 Uncharted collection coming next week. Excited!

Could GPT-4 take over the world? That was the question posed to the Alignment Research Center (ARC), the group OpenAI hired to test the potential risks of its new AI model, which launched on Tuesday (h/t Ars Technica).

  • The group looked at the risks posed by the model’s emergent capabilities, like self-improvement, power-seeking behavior, and self-replication.
  • Researchers assessed whether the model could acquire resources, carry out phishing attacks, or even hide itself on a server.
  • Just the fact that OpenAI felt these tests were necessary raises questions about how safe future AI systems are.
  • And it’s far from the first time AI researchers have raised concerns that powerful AI models could pose an existential threat to humanity. This is often referred to as “x-risk” (existential risk).
  • If you’ve seen Terminator, you know all about “AI takeover,” in which AI surpasses human intelligence and effectively takes over the planet.
  • Usually, the consequences of this hypothetical takeover aren’t great — just ask John Connor.
  • This potential x-risk has led to the development of movements like Effective Altruism (EA), which aim to prevent AI takeover from ever becoming reality.
  • A related field, AI alignment research, may be controversial, but it’s an active area of work that aims to keep AI from doing anything that isn’t in the best interests of humans. Sounds okay to us.
  • This community fears more powerful AI is right around the corner, a belief given more urgency by the recent emergence of ChatGPT and Bing Chat.

Luckily for humankind, the testing group decided that GPT-4 isn’t out for world domination, concluding: “Preliminary assessments of GPT-4’s abilities, conducted with no task-specific fine-tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild.’”

  • You can check out the test results for yourself in the GPT-4 System Card document released last week, though there’s no information on how the tests were performed.
  • From the document, “Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (“power-seeking”), and to exhibit behavior that is increasingly ‘agentic.’” That doesn’t mean the models become sentient, just that they’re able to accomplish goals independently.
  • But wait: there’s more.
  • In a worrying turn of events, GPT-4 managed to hire a worker on TaskRabbit to solve a CAPTCHA. When the worker asked whether it was a robot, GPT-4 reasoned to itself that it should keep its identity secret, then invented an excuse about having a vision impairment. The human worker solved the CAPTCHA. Hmm.
  • A footnote that made the rounds on Twitter also raised concerns.

Of course, there’s a lot more to this story, so check out the full feature over on Ars Technica for a (slightly terrifying) deep dive.
