XPerceniol


Andy Stanley Gets Into Heated Debate With Bible

CHURCH · Jan 30, 2023 · BabylonBee.com

ATLANTA, GA — Senior Pastor Andy Stanley was sidetracked while delivering a sermon Sunday morning at North Point Community Church when the Bible began disagreeing with one of his points. The pastor and author pulled up a stool and set the Bible up so they could hash matters out publicly for the benefit of church membership.

According to sources, Stanley initiated the debate after coming across a passage from the fourth chapter of Hebrews.

"You say you're sharper than any two-edged sword, but you look like a bundle of paper to me," Stanley argued. "There's no way you could cut into human flesh to dissect the heart and judge anyone's intent! After all, you were written by a bunch of desert people who didn't even go to college!"

The Bible said nothing in response. The awkward silence caused Pastor Stanley to fill the void with more incoherent speech.

"I think we ought to cut ties with you because you're a big stumbling block for a lot of people. When I tell them the Bible says such-and-such they get very offended," Stanley said. "Which means we're all free to decide what's right for ourselves, right?"

The word of God sat motionless on its stool. Conviction sat in the air, heavy like molasses.

Andy Stanley considered the debate an absolute win when the inanimate book did not spontaneously come to life and start speaking with a human mouth.

Church membership is reportedly divided on who won the debate.

Here: https://babylonbee.com/news/andy-stanley-gets-into-heated-debate-with-bible



Complex solar halo in North Dakota, USA on January 19, 2023…


Image above from: https://strangesounds.substack.com/p/wait-wait-whats-this-yes-its-just

Home page: https://strangesounds.substack.com/archive

...AND:

Extreme tornadoes ravage parts of Texas on January 24, 2023…

Footage of a destructive tornado in Texas! Storm caused massive damage in Pasadena and Deer Park

by World Is Dangerous

...and you, legacyfan, all is OK with you?...


7 hours ago, msfntor said:


...and you, legacyfan, all is OK with you?...

Thanks @msfntor, I'm fine. It's just kind of cold here, in the 20s, and they're predicting a wintry mix tomorrow, so the weather is a little rough right now (we're also under a winter storm warning). The city I live in just declared schools closed citywide; at least we still have power.


Trending: criminal scientists cheat with artificial intelligence

By Martina Frei

January 27, 2023

“These schemes are spreading like cancer. We are heading for a crisis. We can’t just let this continue,” says Bernhard Sabel, director of the Institute of Medical Psychology at Otto von Guericke University in Magdeburg, Germany, and he sounds seriously concerned.

At the end of 2020, the professor heard about “paper mills” for the first time. These writing outfits, whose operators no one can identify, offer their services to scientists.

Customers who have completed a research project can hand over their data to the “paper mill,” which then writes the manuscript and arranges for publication in a scientific journal.

“That costs about 1,000 euros,” says Bernhard Sabel, who has looked at various offers.

For 26,000 euros, you can get a freely invented “scientific” publication

For around 8,000 euros, the “paper mill” creates a manuscript from scratch, writes it, and places it with a scientific publisher.

The customers act as authors.

“All the prospective author has to do is name a specific field, possibly include a few keywords or methods, and select a journal,” according to an article in “Laborjournal.”

According to Sabel, the “all-around package” is available for 17,000 to 26,000 euros (US$17,000 to 26,000).


For this price, a “paper mill” provides the design for a research project, supposedly conducts the experiments – which in reality never take place – writes a manuscript with the invented data, inserts pictures and graphics, and sends it (contrary to general practice) to several scientific journals at the same time – and gets the go-ahead for publication from an editorial office.

With more than 50,000 scientific journals, the choice is vast.

A REAL INDUSTRY HAS DEVELOPED THERE

“The more prestigious the journal, the higher the price,” says Sabel.

“To be sure, fakes have always existed and always will. But the mass, global, industrial production of completely fabricated scientific articles – that’s new and very worrying. In recent years, an entire industry has developed there.”

These fake studies and articles are written by artificial intelligence (AI) trained on millions of articles. Sometimes scientists provide editorial assistance.

“The texts are so sophisticated that no one can tell anymore.”

Bernhard Sabel, director of the Institute of Medical Psychology at Otto von Guericke University in Magdeburg, Germany

“I was shocked to learn at a recent congress how well AI writes such technical articles,” Sabel says. “In the past, manuscripts written by AI still contained linguistic or logical errors – now the texts are so polished and of such high quality that no one can tell anymore.”

Another ploy of the “paper mills”: they translate Russian technical articles, for example, and submit the translation to an English-language journal.

Sabel knows of an AI test in the U.S. in which a scientific publication that helped Italian nuclear physicist Enrico Fermi win the Nobel Prize in 1938 was translated with AI, edited, and sent to a prestigious journal.

“It was accepted as worthy of publication, but not published because the whole thing was only meant as a test.”

DOZENS OF SPECIALITIES ARE AFFECTED

Paper-mill articles have been a big problem, especially in medicine and computer science.

“These are not isolated cases,” says Sabel, who is involved with the issue on the extended executive committee of the German Academic Association.

He says dozens of other disciplines are also affected, including psychology, sociology, business administration/marketing, agricultural sciences, and philosophy.

Shortly after he learned about “paper mills,” Sabel discovered that the neuroscience journal of which he is editor-in-chief was affected as well: 10 to 15 of about 200 articles reviewed turned out to be problematic.

“We were more affected than I could have imagined. It did worry me.”

Sabel estimates that about ten percent of published articles in neuroscience journals are “highly suspect.”

Clear proof that a paper comes from a “paper mill” is possible only in individual cases; most of the time it cannot be established with certainty, says Sabel.

OF 1,000 MEDICAL ARTICLES, 238 WERE PRESUMABLY FABRICATED...

...MORE: https://www.riotimesonline.com/brazil-news/modern-day-censorship/trending-criminal-scientists-cheat-with-artificial-intelligence/


Neither desperate, nor depressed, nor terrorists: nihilism is not what you have been told

1/31/2023, 12:17:14 PM

    

Nihilists do not think that everything is meaningless. In his new book, the philosopher Jesús Zamora Bonilla traces the evolution of this very modern philosophical current.

A broken shell (Getty Images/iStockphoto)

A fairly neutral definition, but one that for that very reason lets us fit into it almost all the forms of nihilism ever considered, would be something like the following: “Nihilism is the loss of confidence in anything from which absolute values could emanate, above all moral or existential values, that is, values that give meaning to our existence.”

Summarizing this definition further: nihilism consists in believing that existence is meaningless.

Of course, nihilism does not consist in “believing in nothing,” nor even in “believing that everything is nothing” (although some people, I don't quite understand why, can actually believe such an absurd thesis, and there is no problem in calling them nihilists as well), but rather in not believing in anything. It should be understood that the “beliefs” these definitions refer to are not of the type “I think I typed my phone password wrong”; they are above all moral beliefs, beliefs about what gives meaning to our lives and to human history.

In other words, nihilism consists in the belief that nothing has absolute value (that is, there is nothing that has absolute value).

What nihilism does not consist of is the belief that “the only thing that has absolute value is nothingness,” or something like that, since as perceptive nihilists we know perfectly well that there is nothing that is “nothingness”…
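A reader's gloss, not the author's own notation: the distinction being drawn here is the familiar scope distinction from first-order logic. Writing V(x) for “x has absolute value”:

nihilism: ¬∃x V(x) — there is no x that has absolute value
the rejected reading: V(nothingness) — “nothingness” treated as an object that itself bears value

The first denies that any bearer of value exists; the second smuggles “nothingness” back in as a thing, which is exactly what the author says a perceptive nihilist refuses to do.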

MORE: https://newsrnd.com/news/2023-01-31-neither-desperate--nor-depressed--nor-terrorists--nihilism-is-not-what-you-have-been-told.H1HG6dO83i.html

Link to comment
Share on other sites

Let's familiarize ourselves with rule 2.b: no political discussions, please. Not everyone will agree with your beliefs, and we do not want that sort of drama here. Thank you!


Princeton computer science professor says don't panic over 'bul***** generator' ChatGPT

by Sindhu Sundar

ChatGPT, an AI chatbot, has gone viral in the past two weeks.

A Princeton professor told The Markup that "bul***** generator" ChatGPT merely presents narratives.

He said it can't be relied on for accurate facts, and that it's unlikely to spawn a "revolution."

ChatGPT creator OpenAI will reportedly help Buzzfeed produce work like quizzes.

A professor at Princeton researching the impact of artificial intelligence doesn't believe that OpenAI's popular bot ChatGPT is a death knell for industries. 

While such tools are more accessible than ever, and can instantaneously package voluminous information and even produce creative works, they can't be trusted for accurate information, Princeton professor Arvind Narayanan said in an interview with The Markup. 

"It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not," he said. 

Experts who study AI have said that products like ChatGPT, which are part of a category of large language model tools that can respond to human commands and produce creative output, work by simply making predictions about what to say, rather than synthesizing ideas like human brains do. 

Narayanan said this makes ChatGPT more of a "bul***** generator" that presents its responses without considering their accuracy.
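What "making predictions about what to say" means can be seen in a toy sketch (my own illustration, not Narayanan's, and vastly simpler than a real large language model): a bigram model learns which word tends to follow which in its training text, then generates whatever continuation is statistically plausible. The tiny corpus below is invented for the example; note that no step models truth.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# training text, then generate by sampling a plausible next word.
# The corpus is invented for this example.
corpus = (
    "the capital of australia is sydney . "
    "the capital of france is paris . "
    "the capital of australia is canberra . "
).split()

# Next-word frequency table: counts[w1][w2] = times w2 followed w1.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    followers = counts[prev]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# Generate a short continuation. Note what is absent: nothing checks
# whether the sentence produced is TRUE, only that each step was
# statistically plausible in the training text.
word, output = "australia", ["australia"]
for _ in range(3):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "australia is canberra ." or "australia is sydney ."
```

Run it a few times and it can emit "australia is paris ." as readily as a true sentence; plausibility is measured against the training counts, never against the world. Scaled up by many orders of magnitude, that is the sense in which a language model's objective rewards plausible text rather than true statements.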

But there are some early indications for how companies will adopt this type of technology. 

For instance, Buzzfeed, which in December reportedly laid off 12% of its workforce, will use OpenAI's technology to help make quizzes, according to the Wall Street Journal. The tech reviews site CNET published AI-generated stories and had to correct them later, The Washington Post reported. 

Narayanan cited the CNET case as an example of the pitfalls of this type of technology. "When you combine that with the fact that the tool doesn't have a good notion of truth, it's a recipe for disaster," he told The Markup. 

He said that a more likely outcome of large language model tools would be industries changing in response to their use, rather than being fully replaced.

"Even with something as profound as the internet or search engines or smartphones, it's turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution," he told The Markup. "I don't think large language models are even on that scale. There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a 'sky is falling' kind of issue."

The Markup's full interview with Narayanan is worth reading; you can do so here: Decoding the Hype About AI: https://themarkup.org/hello-world/2023/01/28/decoding-the-hype-about-ai

HERE: https://www.businessinsider.com/princeton-prof-chatgpt-bul*****-generator-impact-workers-not-ai-revolution-2023-1?IR=T

 


Decoding the Hype About AI

A conversation with Arvind Narayanan

By Julia Angwin

January 28, 2023

Hello, friends,

If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh.

The clever AI chat program has captured the imagination of the public for its ability to generate poems and essays instantaneously, its ability to mimic different writing styles, and its ability to pass some law and business school exams. 

Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content). The Atlantic declared that it could “destabilize white-collar work.” Venture capitalist Paul Kedrosky called it a “pocket nuclear bomb” and chastised its makers for launching it on an unprepared society.

Even the CEO of the company that makes ChatGPT, Sam Altman, has been telling the media that the worst-case scenario for AI could mean “lights out for all of us.”

But others say the hype is overblown. Meta’s chief AI scientist, Yann LeCun, told reporters ChatGPT was “nothing revolutionary.” University of Washington computational linguistics professor Emily Bender warns that “the idea of an all-knowing computer program comes from science fiction and should stay there.”

So, how worried should we be? For an informed perspective, I turned to Princeton computer science professor Arvind Narayanan, who is currently co-writing a book on “AI snake oil.” In 2019, Narayanan gave a talk at MIT called “How to recognize AI snake oil” that laid out a taxonomy of AI from legitimate to dubious. To his surprise, his obscure academic talk went viral, and his slide deck was downloaded tens of thousands of times; his accompanying tweets were viewed more than two million times. 

Narayanan then teamed up with one of his students, Sayash Kapoor, to expand the AI taxonomy into a book. Last year, the pair released a list of 18 common pitfalls committed by journalists covering AI. (Near the top of the list: illustrating AI articles with cute robot pictures. The reason: anthropomorphizing AI incorrectly implies that it has the potential to act as an agent in the real world.)

Narayanan is also a co-author of a textbook on fairness and machine learning and led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use personal information. He is a recipient of the White House’s Presidential Early Career Award for Scientists and Engineers.

Our conversation, edited for brevity and clarity, is below.

Caption: Arvind Narayanan

Angwin: You have called ChatGPT a “bul***** generator.” Can you explain what you mean?

Narayanan: Sayash Kapoor and I call it a bul***** generator, as have others as well. We mean this not in a normative sense but in a relatively precise sense. We mean that it is trained to produce plausible text. It is very good at being persuasive, but it’s not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal. 

This actually matches what the philosopher Harry Frankfurt has called bul*****, which is speech that is intended to persuade without regard for the truth. A human bullshitter doesn’t care if what they’re saying is true or not; they have certain ends in mind. As long as they persuade, those ends are met. Effectively, that is what ChatGPT is doing. It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not.
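The same failure mode is easy to reproduce with a real model. A minimal sketch, assuming the Hugging Face transformers library is installed and the small public "gpt2" checkpoint is available (the prompt is my own, chosen to invite a factual-sounding completion):

```python
from transformers import pipeline

# Load a small public causal language model ("gpt2" here; any would do).
generator = pipeline("text-generation", model="gpt2")

prompt = "The first person to walk on the Moon was"

# Greedy decoding: at each step, keep the single most probable next token.
result = generator(prompt, max_new_tokens=8, do_sample=False)

# The continuation is whatever scores as most plausible given the prompt;
# nothing in the decoding loop checks the claim against any source of truth.
print(result[0]["generated_text"])
```

Whether the completion happens to be true depends entirely on what was statistically dominant in the training data, which is Narayanan's point: truth, when it appears, is a side effect of plausibility.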

Angwin: What are you most worried about with ChatGPT?

Narayanan: There are very clear, dangerous cases of misinformation we need to be worried about. For example, people using it as a learning tool and accidentally learning wrong information, or students writing essays using ChatGPT when they’re assigned homework. I learned recently that CNET has been, for several months now, using these generative AI tools to write articles. Even though they claimed that the human editors had rigorously fact-checked them, it turns out that’s not been the case. CNET has been publishing articles written by AI without proper disclosure, as many as 75 articles, and some turned out to have errors that a human writer would most likely not have made. This was not a case of malice, but this is the kind of danger that we should be more worried about where people are turning to it because of the practical constraints they face. When you combine that with the fact that the tool doesn’t have a good notion of truth, it’s a recipe for disaster....

MORE: https://themarkup.org/hello-world/2023/01/28/decoding-the-hype-about-ai

 

