AI Development Works Better For Everyone When Its Workforce Is Well Looked After


Author: Peter Bloom

A former CEO and executive chairman of Google recently suggested that the tech giant's apparent lag in AI development was due to the company prioritising employees' personal wellbeing over progress. Eric Schmidt told an audience: “Google decided that work-life balance and going home early and working from home was more important than winning.”

Schmidt later retracted his statement, claiming he “misspoke”. Yet his comment reflects a common view in the tech industry – that progress depends on intensive work patterns and keeping a close eye on staff.

Companies such as Amazon have implemented controversial worker-tracking systems. Others promote a culture of “overworking” as a necessary part of innovation.

But this mindset overlooks the crucial role that an engaged and happy workforce plays in creating beneficial technology. Studies have shown, for example, that remote working and better work-life balance often lead to increased productivity rather than hindering progress.

History also shows that empowering workers and fostering a democratic approach have accelerated technological breakthroughs. The open-source movement, in which software and the knowledge behind it are shared freely, is a case in point. Wikipedia is another example – a success story built entirely on volunteer contributions and collective effort.

In AI too, there has been rapid progress from projects that emphasise openness and collaboration, such as BLOOM and GPT-J, openly available language models similar to ChatGPT. This demonstrates that democratising access to AI tools and knowledge can accelerate progress.

Meanwhile, many of the ethical challenges in AI development – from algorithmic bias to privacy concerns – stem from rushed development cycles and a lack of diverse perspectives.

For instance, racial and gender biases in facial recognition systems reportedly emerged because development teams were working under pressure to deliver results quickly. The Cambridge Analytica scandal, which exposed the misuse of Facebook user data, illustrated the risks of prioritising growth and profit over privacy and social impact.

The drive for relentless productivity and market dominance has also led to the emergence of “digital sweatshops” – exploitative labour regimes associated with AI development.

These include content moderation “factories”, where workers are exposed to traumatic material for long hours with minimal support (a spokesperson for Facebook's parent company said it takes its responsibility to content reviewers seriously, with “industry-leading pay, benefits and support”). They also include the data-processing operations behind machine learning, where workers in low-wage countries perform repetitive tasks for little reward.

Companies such as Facebook, Google and Amazon have been criticised for outsourcing these crucial (but often overlooked) aspects of AI development to contractors with poor working conditions. These practices highlight the human cost of rapid AI advancement, where the real motivation is often corporate dominance and maximising shareholder value.

This model also leads to innovations that fail to address broader social and ecological challenges. The substantial carbon footprint associated with AI development underlines the urgent need for more considered, sustainable methods.



But such methods are more likely to emerge from well-treated teams who are granted the autonomy to explore and address the wider implications of their work. They will not come from rigid hierarchies focused solely on immediate financial returns.

Herein lies the false binary between worker power and technological advancement. The evidence suggests that when executives exert too much control, the development of socially beneficial technology is hindered. Top-down control simply cannot provide what empowered workers and open collaboration bring to the table.

Socially beneficial intelligence

Worker-led initiatives have also been at the forefront of ethical technology development. For example, Google employees' protest against the company's involvement with Project Maven , a US military AI scheme, was a success. And Amazon workers have continued to push for the company to improve its environmental credentials.

Schmidt spoke of “winning” in the AI race. But what exactly is being won through techniques that prioritise corporate control and worker exploitation? Often the result is unethical technology developed under exploitative conditions – technology which serves narrow corporate interests rather than social needs.

But the future of AI and other emerging technologies should not be driven solely by market forces. Innovation does not require oppressive work conditions or excessive corporate control.

And technological progress and social progress are not mutually exclusive. In fact, they can be mutually strengthening. A truly successful AI industry would be one that produces innovative technologies in a way that empowers workers, addresses ethical considerations and makes a positive contribution to society.


The Conversation

