
Ethical AI: Are organizations responsible enough with AI or is their investment reckless?

Why do organizations leveraging AI need to be regulated? They’re serving their customers more efficiently while cutting costs, right?

According to an ESG research report commissioned by Qlik, adoption of AI technologies among organizations is widespread: an overwhelming 97% of surveyed organizations are actively engaging with AI, and a significant portion (74%) have already incorporated generative AI technologies into production. This marks a notable shift towards AI-driven operations across sectors.

Read more: Organizations gear up with AI as AI FOMO sets in but many still hesitate

However, while investment in AI is active, there is a notable gap in strategic planning, with 74% of organizations admitting they still lack a comprehensive, organization-wide approach to responsible AI.

Challenges in ethical AI practices include ensuring transparency and explainability in AI systems, pointing to a critical need for solutions that demystify AI processes. Nearly all organizations (99%) face hurdles in staying compliant with AI regulations and standards, underscoring the complex regulatory landscape surrounding AI technologies.

While there is growing recognition of the need for AI regulation, over a quarter of organizations have already encountered increased operational costs, regulatory scrutiny, and market delays due to inadequate responsible AI measures.


Michael Leone, Principal Analyst at ESG, talks about a gap in effectively implementing responsible AI practices. “As organizations accelerate their AI initiatives, the necessity for a solid foundation that supports ethical guidelines and robust data governance becomes crucial.”

Drew Bagley, VP and Counsel, Privacy and Cyber Policy, CrowdStrike, says, “AI has long been transformative for modern technology, but recent developments have lowered the barrier of entry to innovators and adversaries alike. This is especially true in cybersecurity where defenders must rely upon AI to detect and prevent cyberattacks at scale in an era dominated by malware-free attacks and zero-day exploits.”

He points out that adversaries continue expressing interest in leveraging large language models (LLMs) to move more quickly and scale their operations.


“Ultimately, it’s critical that AI can and should be leveraged in a responsible way. The natural language interface of today’s LLMs has the potential to make cybersecurity roles and responsibilities more broadly accessible, helping to close the cybersecurity skills gap and improve response time so defenders can stay ahead of adversaries – boosting proactive security across organizations and agencies. This is why investing in responsible AI innovation is more critical than ever.” 

Company Efforts

AI companies are showing responsibility in many cases. In January, Qlik established an inaugural AI Council, convening a distinguished set of advisors to accelerate the responsible development of its AI-driven product portfolio.

Qlik’s Generative AI Benchmark Report found that 31% of senior executives plan to spend over US$10 M on Gen AI initiatives in the coming year, and 79% have already invested in Gen AI tools or projects. However, if the data building blocks of AI are not governed properly as the technology is democratized across the entire workforce, it could pose a serious threat to the efficiency and integrity of business operations.


Rumman Chowdhury, member of Qlik’s AI Council, said, “We’ve reached an inflection point where innovations like generative AI are impacting the world as the internet did. This is not the time for complacency. ‘Adopting AI’ is not as simple as some suggest, but getting left behind is a risky game. By taking responsible steps, organizations can enter an era of unprecedented innovation – I look forward to being able to contribute to this evolution.”

AI is a shiny new addition to the toolkit that companies can take advantage of right now, and they are doing so with gusto. It’s important that they also do so with responsibility.

Read more: The wise thing to do is work in tandem with AI regulation by keeping the human element relevant

Navanwita Bora Sachdev

Navanwita is the editor of The Tech Panda and also frequently publishes stories in news outlets such as The Indian Express, Entrepreneur India, and The Business Standard.
