How AI is used to surveil workers


Opaque algorithms meant to analyze worker productivity have been rapidly spreading through our workplaces, as detailed in a new must-read piece by Rebecca Ackermann, published Monday in MIT Technology Review.

Since the pandemic, a number of companies have adopted software to analyze keystrokes or detect how much time workers are spending at their computers. The trend is driven by a suspicion that remote workers are less productive, though that's not broadly supported by economic research. Still, that belief is behind the efforts of Elon Musk, DOGE, and the Office of Personnel Management to roll back remote work for US federal employees.

The focus on remote workers, though, misses another big part of the story: algorithmic decision-making in industries where people don't work from home. Gig workers like ride-share drivers can be kicked off their platforms by an algorithm, with no way to appeal. Productivity systems at Amazon warehouses dictated a pace of work that Amazon's internal teams found would lead to more injuries, but the company implemented them anyway, according to a 2024 congressional report.

Ackermann posits that these algorithmic tools are less about efficiency and more about control, which workers have less and less of. There are few laws requiring companies to offer transparency about what data goes into their productivity models and how decisions are made. "Advocates say that individual efforts to push back against or evade electronic monitoring are not enough," she writes. "The technology is too widespread and the stakes too high."

Productivity tools don't just track work, Ackermann writes. They reshape the relationship between workers and those in power. Labor groups are pushing back against that shift in power by seeking to make the algorithms that fuel management decisions more transparent.

The full piece contains a lot that surprised me about the widening scope of productivity tools and the very limited means that workers have to understand what goes into them. As the pursuit of efficiency gains political influence in the US, the attitudes and technologies that transformed the private sector may now be extending to the public sector. Federal workers are already preparing for that shift, according to a new story in MIT Technology Review. For some clues as to what that might mean, read Rebecca Ackermann's full story.


Now read the rest of The Algorithm

Deeper Learning

Microsoft announced last week that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up.

Why it matters: Quantum computers promise to crunch computations faster than any conventional computer humans could ever build, which could mean faster discovery of new drugs and scientific breakthroughs. The issue is that qubits—the unit of information in quantum computing, rather than the typical 1s and 0s—are very, very finicky. Microsoft's new type of qubit is supposed to make fragile quantum states easier to maintain, but scientists outside the project say there's a long way to go before the technology can be proved to work as intended. And on top of that, some experts are asking whether rapid advances in applying AI to scientific problems could negate any real need for quantum computers at all. Read more from Rachel Courtland.

Bits and Bytes

X’s AI model appears to have briefly censored unflattering mentions of Trump and Musk

Elon Musk has long alleged that AI models suppress conservative speech. In response, he promised that his company xAI's AI model, Grok, would be "maximally truth-seeking" (though, as we've pointed out previously, making things up is just what AI does). Over last weekend, users noticed that if you asked Grok who the biggest spreader of misinformation is, the model reported it was explicitly instructed not to mention Donald Trump or Elon Musk. An engineering lead at xAI said an unnamed employee had made the change, but it has now been reversed. (TechCrunch)

Figure demoed humanoid robots that can work together to put your groceries away

Humanoid robots aren't typically very good at working with one another. But the robotics company Figure showed off two humanoids helping each other put groceries away, another sign that general AI models for robotics are helping robots learn faster than ever before. However, we've written about how videos featuring humanoid robots can be misleading, so take these developments with a grain of salt. (The Robot Report)

OpenAI is shifting its allegiance from Microsoft to SoftBank

In calls with its investors, OpenAI has signaled that it's weakening its ties to Microsoft—its largest investor—and partnering more closely with SoftBank. The latter is now working on the Stargate project, a $500 billion effort to build data centers that will support the bulk of the computing power needed for OpenAI's ambitious AI plans. (The Information)

Humane is shutting down the AI Pin and selling its remnants to HP

One big debate in AI is whether the technology will require its own piece of hardware. Rather than simply conversing with AI on our phones, will we need some kind of dedicated device to talk to? Humane got investments from Sam Altman and others to build just that, in the form of a badge worn on your chest. But after poor reviews and sluggish sales, last week the company announced it would shut down. (The Verge)

Schools are replacing counselors with chatbots

School districts, dealing with a shortage of counselors, are rolling out AI-powered "well-being companions" for students to text with. But experts have pointed out the dangers of relying on these tools and say the companies that make them often misrepresent their capabilities and effectiveness. (The Wall Street Journal)

What dismantling America’s leadership in scientific research will mean

Federal workers spoke to MIT Technology Review about the efforts by DOGE and others to slash funding for scientific research. They say it could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public's access to next-generation consumer technologies. (MIT Technology Review)

Your most important customer may be AI

People are relying increasingly on AI models like ChatGPT for recommendations, which means brands are realizing they need to figure out how to rank higher, much as they do with traditional search results. Doing so is a challenge, since AI model makers offer few insights into how they form recommendations. (MIT Technology Review)
