Yagub Rahimov Chats About the Vexing Challenge of ‘Transparency and AI’

IT Strategies

By Holt Hackney

It's right there in Yagub Rahimov's X bio – "obsessed with transparency and AI."

It's not a new obsession. It's just that his passion for "transparency and AI" has become increasingly relevant as AI adoption in the technology community has escalated.

Rahimov even built a company that is predicated on his obsession – Polygraf, which bills itself as an AI-age Data Leak Prevention (DLP) solution provider. There are many other examples in which Rahimov has demonstrated an uncanny knack for being at the right place at the right time with regard to technology.

Thus, we wanted to get his informed take on AI.

What follows is our exclusive interview.

Question: How would you characterize the adoption of AI technology in recent years? Irresponsible? Good? Measured?

Answer: Recent AI adoption has been rapid, ambitious, and surprising to many, almost magical. But those who have been working behind the scenes for years have been expecting this kind of advancement. It mirrors human evolution: it starts slow and then progresses rapidly. However, this evolution has not been the same everywhere.

It is like the manifestation of our collective human aspiration to break through our natural limitations and reinvent our capabilities. However, it also appears as if we are playing the creator role without the necessary ethical and existential contemplation. Some of us are playing with fire; it just hasn't burned us yet.

We are at a crossroads where AI developments are advancing faster than we can fully comprehend their implications. Builders are determined (let's not forget there is a race for supremacy) to build an autonomous future that is likely to shape our destiny, and there is not yet sufficient scrutiny around it. I would not call this entirely irresponsible so much as naive; there is a great deal of underestimation of the profound impact AI has on the fabric of our society, privacy, and the essence of what it means to be human.

There is no good or bad technology; there are good or bad technology users, and unfortunately there is more malicious intent out there than good. Builders like us are playing with the threads of innovation, hope, and unintended consequences. We all need a more thoughtful approach that balances innovation with ethical responsibility and human-centric values.

The good news is that we have multiple organizations, such as the American Security Foundation, Polygraf AI, C2PA, ALEC, and others, advocating for responsible AI practices.

Q: What can we learn from the US election season that demonstrates the challenge of embracing AI technology?

A: My team has been working closely with various think tanks and other organizations for the past six months monitoring the recent elections, and I can say this election was wild. We could call it the first AI-embraced election we have ever had. We saw how AI technologies, particularly social media algorithms and deepfake generation, can influence public perception and undermine democratic processes. And the orchestrators used AI on both sides.

An important observation I had was the massive social-media-based misinformation that continued until 10 p.m. PT and then abruptly stopped. Who was behind it?

I believe this situation underscores the epistemological crisis of modern times: we had an abundance of information that created more noise than answers, and everyone struggled to distinguish truth from falsehood. Those who mastered generative AI managed to create hyper-realistic but false content, blurring the lines between reality and fabrication.

We need systems that promote transparency and veracity. We also need to understand how important critical rationality is: it is no longer enough to say I know something because I have seen it or read it; provenance is a must. Otherwise, the very foundations of trust and truth upon which democratic societies are built will erode fast!
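To make the provenance point concrete: standards such as C2PA attach cryptographically signed manifests to media so consumers can verify who published content and whether it has been altered. The sketch below is a deliberately simplified illustration of that sign-then-verify idea using only Python's standard library; the key and function names are invented for the example, and real provenance systems use public-key certificates rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical publisher signing key (illustrative only; C2PA and similar
# standards use public-key certificates, not a shared secret).
PUBLISHER_KEY = b"example-newsroom-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC over the content's SHA-256 hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """True only if the content is byte-for-byte unmodified since signing."""
    return hmac.compare_digest(sign_content(content), tag)

article = b"Candidate X spoke in Austin on Tuesday."
tag = sign_content(article)

print(verify_content(article, tag))                      # True
print(verify_content(b"Candidate X dropped out.", tag))  # False
```

Even a one-character edit changes the hash, so the tampered version fails verification, which is the property that lets a reader trust "I know this because its provenance checks out" rather than "I know this because I saw it."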

We are working with C2PA and other organizations to fight deepfake-driven malicious content, misinformation, and propaganda.

Q: What are other challenges you see with AI that must be addressed?

A: I have interviewed over 700 CIOs and countless engineers, legislators, educators, researchers, and more, trying to understand the most important AI challenges we face. Let's divide these challenges into two categories:

  • Technical challenges
  • Philosophical challenges

From a technical perspective, the key troublesome AI challenges are these:

  • AI has major privacy issues – AI is a data ledger, and unfortunately the average user does not know how to use it safely yet. At least 12% of all prompts that go to AI chatbots involve private and confidential information.
  • AI hallucinates – unfortunately we treat AI as if it is magic; it is not. Just because an answer looks great does not mean it is accurate.
  • AI is biased – builder bias can be seen in many foundational models, and these biases can be dangerous. However, sometimes we need bias. For example, if I am a lawyer, I want my AI assistant to be biased toward my expertise, or even to think like me, since this would be my competitive advantage. Personal bias should be a feature, not a bug.
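The privacy challenge above (confidential data flowing into chatbot prompts) is exactly what data leak prevention tooling screens for. As a deliberately naive illustration of the idea, not Polygraf's actual method, the sketch below checks a prompt against a few regex patterns before it is sent; the patterns and function names are invented for this example, and production DLP relies on far richer detection than regexes.

```python
import re

# Naive patterns for common confidential identifiers (illustrative only).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the kinds of sensitive data detected in a prompt."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Gate the prompt: block it if any sensitive pattern matches."""
    return not scan_prompt(prompt)

print(scan_prompt("Summarize: jane.doe@example.com, SSN 123-45-6789"))
print(safe_to_send("What is the capital of France?"))
```

A gate like `safe_to_send` would sit between the user and the chatbot API, which is the kind of safeguard an average user, per the statistic above, does not currently have.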

From a philosophical perspective, we have much to do as well:

  • Ethical Autonomy – we should not let AI systems make decisions that have moral implications. Who will be responsible if and when AI makes harmful decisions?
  • Existential Risk – can we imagine what superintelligence will be like? Would it favor organic life or synthetic life?
  • Human Identity – If machines perform better than us, how are we going to live a meaningful life beyond economic productivity?
  • Social Inequality – How will we handle the social division caused by access to AI? Can we consider those who have early AI access to be privileged?
  • AI in warfare – We see more and more AI-assisted modern warfare in hotspots throughout the world. Which international agency should be responsible for international AI treaties for warfare?

Other challenges also include:

  • AI energy impact – training AI solutions consumes significant capital and energy, some of it unsustainable.
  • Education – AI can help us reduce the knowledge gap and help us understand different cultures, religions, and other hidden secrets of the world. However, there are accessibility challenges.

Technically and philosophically, addressing these challenges requires cross-collaboration among technical, legal, operational, and even religious perspectives.

Q: What are countries or unions (in the case of the EU) doing the most to ensure the responsible adoption of AI and why?

A: Most states and countries have an AI task force, some better than others and some merely bureaucratic. In the US, the most compelling responsible AI bill I've seen came from the state of Utah. They started working on it almost two years ago. Jefferson Moss and his team worked diligently, using the technology and trying to understand the technical as well as ethical challenges before drafting the bill.

Additionally, there are various think tanks throughout the nation, such as the Abundance Institute, working on AI-related policies and bringing technology and policy leaders together. In Texas, we recently got a new AI bill; on the surface it looks great, but it is quite similar to the EU AI Act. If adopted, I don't think it would put the state ahead but rather set us back from an innovation perspective.

As an example, the EU opted for overregulation with the EU AI Act, quite proudly calling it the first-ever comprehensive AI policy. While I think AI governance and regulation are mandatory, I also think over-regulation is bad for innovation. As a result, many AI innovators have left the EU for the US, Asia, and the Middle East to escape restrictions.

On the other hand, countries like China, Russia, and Iran have different AI policies; they often adopt a "develop at whatever cost" strategy, ignoring ethics completely.

The Polygraf AI team has been working on the AI governance policies most states are focused on. However, we use technology to handle technology problems, not punishments.

Q: What advice would you give IT architects and CIOs as they explore the transformative power of AI?

A: I would advise the same list that we have as a part of the Polygraf AI team mission:

  • Security and privacy at heart – respect people's privacy and make sure you build privacy safeguards into your systems from the ground up. Use solutions like Polygraf AI Guard to prevent data leaks.
  • Provide transparency and explainability – embed as much transparency and explainability into your models as possible, so that users can question and scrutinize the decisions your engines have made. This will build trust and rapport.
  • Address bias and turn it into a feature – implement processes that identify builder bias and eliminate it; instead, give your users the ability to embed their personal bias so that you can turn AI bias into a feature instead of a bug.
  • Engage with your policymakers – not only make sure you are aware of all the latest AI regulations and bills, but also keep in touch with your local policymakers to share your opinion about what needs to be done.
  • Collaborate – the future is collaborative; you need to engage with other organizations and groups to leverage joint expertise and technological advancements.
  • Invest in your employees – equip your employees with the necessary skills and knowledge from ethical and responsible AI perspective.

In the end, we have one goal: to enrich human life and promote fairness and ethics. If we protect our ethical values and keep innovating, we will achieve it.