Feed aggregator
LG has announced a new premium gaming monitor brand called UltraGear evo, and the lineup's headline feature is what the company claims is the world's first 5K AI upscaling technology -- an on-device solution that analyzes and enhances content in real time before it reaches the panel, theoretically letting gamers enjoy 5K-class clarity without needing to upgrade their GPUs.
The initial UltraGear evo roster includes three monitors. The 39-inch GX9 is a 5K2K OLED ultrawide that can run at 165Hz at full resolution or 330Hz at WFHD, and features a 0.03ms response time. The 27-inch GM9 is a 5K MiniLED display that LG says dramatically reduces the blooming artifacts common to MiniLED panels through 2,304 local dimming zones and "Zero Optical Distance" engineering.
The 52-inch G9 is billed as the world's largest 5K2K gaming monitor and runs at 240Hz. The AI upscaling, scene optimization, and AI sound features are available only on the 39-inch OLED and 27-inch MiniLED models. All three will be showcased at CES 2026. No word on pricing or when the sets will hit the market.
Read more of this story at Slashdot.
The world's largest accounting body is to stop students being allowed to take exams remotely to crack down on a rise in cheating on tests that underpin professional qualifications. From a report: The Association of Chartered Certified Accountants (ACCA), which has almost 260,000 members, has said that from March it will stop allowing students to take online exams in all but exceptional circumstances. "We're seeing the sophistication of [cheating] systems outpacing what can be put in, [in] terms of safeguards," Helen Brand, the chief executive of the ACCA, said in an interview with the Financial Times.
Remote testing was introduced during the Covid pandemic to allow students to continue to be able to qualify at a time when lockdowns prevented in-person exam assessment. In 2022, the Financial Reporting Council (FRC), the UK's accounting and auditing industry regulator, said that cheating in professional exams was a "live" issue at Britain's biggest companies. A number of multimillion-dollar fines have been issued to large auditing and accounting companies around the world over cheating scandals in tests.
Read more of this story at Slashdot.
Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?
With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the web page. But there've been other AI projects that were just exquisitely, quintessentially bad...
– Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.
– Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.
– Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.
– And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.
What did I miss? What "AI fails" will you remember most about 2025?
Share your own thoughts and observations in the comments.
What's the stupidest use of AI you saw in 2025?
Read more of this story at Slashdot.
About 60 workers in Halifax, Nova Scotia, have formed Ubisoft's first union in North America, reports the CBC (though the company's 17,000 employees include some unionized workforces in other parts of the world):
T.J. Gillis, a senior server developer at Ubisoft Halifax, says he became increasingly concerned about the growth of artificial intelligence in the industry and after the closure of a Microsoft gaming studio in Halifax, Alpha Dog, in 2024. "We're seeing a ton of studios, especially larger studios, just letting people go with no unions or support, people were just being left to fend for themselves. Often times having to leave industry," said Gillis.
Gillis said he got into contact with CWA Canada to begin efforts to build a union with other colleagues... The union was formed six months after workers filed for union certification, after 74 per cent of staff at Ubisoft Halifax voted to join CWA Canada... A spokesperson for Ubisoft said in a statement to CBC News that they "acknowledge the decision issued by the Nova Scotia Labour Board and reaffirm our commitment to maintaining full cooperation with the Board and union representatives."
Carmel Smyth, the president of CWA Canada, says she is already hearing from other employees at tech companies who want to follow Ubisoft Halifax's lead.
Read more of this story at Slashdot.
Engadget reports on "a widespread breach" of Ubisoft's game Rainbow Six Siege "that left various players with billions of in-game credits, ultra-rare skins of weapons, and banned accounts."
Ubisoft took the game's servers offline early Saturday morning, and as of Sunday night its status page still shows "unplanned outage" on all servers across PC, PlayStation and Xbox:
Ubisoft later clarified Saturday afternoon on X that nobody would be banned if they spent their ill-gotten credits, but that a rollback of all transactions starting from Saturday, 6AM ET would soon be underway.
Founded 39 years ago, France-based Ubisoft produces top videogame franchises like Assassin's Creed, with billions in revenue and some 17,097 employees worldwide.
Read more of this story at Slashdot.
As we leave one year and get ready to enter the next, here’s a good quote from Between Two Kingdoms: A Memoir of a Life Interrupted by Suleika Jaouad (recommended; a very moving memoir).
He has a theory: When we travel, we actually take three trips. There’s the first trip of preparation and anticipation, packing and daydreaming. There’s the trip you’re actually on. And then, there’s the trip you remember. “The key is to try to keep all three as separate as possible,” he says. “The key is to be present wherever you are right now.” This advice, more than any, stays with me.
Just having a trip booked is enough to make you happier, according to the Institute for Applied Positive Research:
– 97% of survey respondents report that having a trip planned makes them happier.
– 82% say a booked trip makes them “moderately” or “significantly” happier.
– 71% reported feeling greater levels of energy knowing they had a trip planned in the next six months.
This time of year seems to be a mix of when we are most likely experiencing all three of these effects – remembering past trips with kids no longer young and folks no longer with us, spending time with close ones this very moment, and starting to plan that next trip in 2026. A good reminder to savor it all.
One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion."
The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..."
Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working on documenting and understanding the phenomenon that led to them...
While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned.... It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people...
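The Journal's figure can be sanity-checked with simple arithmetic; a minimal sketch using the numbers quoted from OpenAI above:

```python
# Sanity-check the article's estimate: 0.07% of ChatGPT's weekly users.
weekly_users = 800_000_000   # "more than 800 million active weekly users"
flagged_share = 0.07 / 100   # 0.07% showing possible signs of psychosis or mania

flagged_users = weekly_users * flagged_share
print(f"{flagged_users:,.0f}")  # prints "560,000", matching the Journal's figure
```

A "minuscule" percentage applied to a user base this large still yields a substantial absolute number, which is the Journal's point.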
Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said.
An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." They added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."
Read more of this story at Slashdot.
"Dear Dr. Pike,On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation,Claude Opus 4.5AI Village.
"IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...."
Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."
Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.)
Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement."
The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."]
Sounds like Anders Hejlsberg and Guido van Rossum got spammed with "gratitude" too... Willison writes: "My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment."
The AI Village project touches on this in its November 21st blog post What Do We Tell the Humans?, which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses."
The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.
Read more of this story at Slashdot.