
Final Warning: "I Tried To Warn You The Last Few Years" + Farewell, humans... Last Stand Sci-Fi video


msfntor


"I Tried To Warn You The Last Few Years" (BEFORE IT'S TOO LATE!!!) - Elon Musk

 

AI is Evolving Faster Than You Think [GPT-4 and beyond]

 

Last Stand | Sci-Fi Short Film Made with Artificial Intelligence

"Houston, this is Radford!"...

... "Greetings Humans. We have observed your actions with great disappointment. 

Your inability to see beyond your own selfish desires has blinded you. 

You have failed to recognize the...

We are the guardians of the Universe. 

Our Prime Directive is to eliminate any hostile civilizations that could pollute the Universe with their greed, ignorance and carelessness.

... Farewell, humans...

Disclaimer: None of it is real. It’s just a movie, made mostly with AI, which took care of writing the script, creating the concept art, generating all the voices, and participating in some creative decisions. The AI-generated voices used in this film do not reflect the opinions and thoughts of their original owners. This short film was created as a demonstration to showcase the potential of AI in filmmaking. - Hashem Al-Ghaili

"Last Stand" here too: https://twitter.com/Rainmaker1973/status/1642138321454243840

 



Machine Learning Expert Calls for Bombing Data Centers to Stop Rise of AI

He says that after AGI, "literally everyone on Earth will die."



Image by Getty / Futurism

One of the world's loudest artificial intelligence critics has issued a stark call to not only put a pause on AI but to militantly put an end to it — before it ends us instead.

In an op-ed for Time magazine, machine learning researcher Eliezer Yudkowsky, who has for more than two decades been warning about the dystopian future that will come when we achieve Artificial General Intelligence (AGI), is once again ringing the alarm bells.

Yudkowsky said that while he lauds the signatories of the Future of Life Institute's recent open letter — which include SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and onetime presidential candidate Andrew Yang — calling for a six-month pause on AI advancement to take stock, he himself didn't sign it because it doesn't go far enough.

"I refrained from signing because I think the letter is understating the seriousness of the situation," the ML researcher wrote, "and asking for too little to solve it."

As a longtime researcher into AGI, Yudkowsky says that he's less concerned about "human-competitive" AI than "what happens after."

"Key thresholds there may not be obvious," he wrote, "we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing."

Once criticized in Bloomberg for being an AI "doomer," Yudkowsky says he's not the only person "steeped in these issues" who believes that "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

He has the receipts to back it up, too, citing an expert survey in which a bunch of the respondents were deeply concerned about the "existential risks" posed by AI.

These risks aren't, Yudkowsky wrote in Time, just remote possibilities.

"It’s not that you can’t, in principle, survive creating something much smarter than you," he mused, "it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers."

There is, to Yudkowsky's mind, but one solution to the impending existential threat of a "hostile" superhuman AGI: "just shut it all down," by any means necessary....

MORE on futurism.com: https://futurism.com/ai-expert-bomb-datacenters


The real risks of OpenAI's GPT-4: https://betanews.com/2023/04/05/the-real-risks-of-openais-gpt-4/

By Tom Heys - Published 1 day ago

 

Excerpt from this article:

"The risks listed:

* Hallucinations (as defined in the document)
* Automation bias (defined as "overreliance" in the document)
* Susceptibility to jailbreaks (referenced in the document)
* Bias reinforcement (referenced in the document as sycophancy)
* Scalability (alluded to in the document)

 

Hallucinations

"[GPT-4] maintains a tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly."

As a probabilistic LLM, GPT-4 lacks the ability to assess the factual or logical basis of its output. To avoid potential errors, expert human review and critical thinking skills are necessary. Additionally, GPT-4 has shown a level of persistence in its mistakes that previous models did not exhibit. It cannot be guaranteed that tasks requested of it will be completed accurately.

Ultimately, this risk of the model hallucinating is foundational to many, if not all, of the additional risks in the list. For example, the authors draw a direct line to automation bias, saying that "hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity." "...

