Companies with AI chatbots love to tout their capabilities as translators, but the bots still default to English, both in how they function and in the data they're trained on. Against that backdrop, Humain, an AI company in Saudi Arabia, has now launched an Arabic-native chatbot.
The bot, called Humain Chat, runs on the Allam large language model, according to Bloomberg, which the company claims was trained on “one of the largest Arabic datasets ever assembled” and is the “world’s most advanced Arabic-first AI model.” The company says that it is fluent not only in the Arabic language, but also in “Islamic culture, values and heritage.” (If you have religious concerns about using Humain Chat, consult your local Imam.) The chatbot, offered as an app, will launch only in Saudi Arabia at first. It currently supports bilingual conversations in Arabic and English, including dialects such as Egyptian and Lebanese. The plan is for the app to roll out across the Middle East and eventually go global, with the goal of serving the nearly 500 million Arabic speakers across the world.
Humain took over Allam and the chatbot project from the Saudi Data and Artificial Intelligence Authority, the government agency and tech regulator that started it. For that reason, Bloomberg raises the possibility that Humain Chat may comply with censorship requests from the Saudi government and restrict the kind of information made available to users.
Which, yes, seems unquestionably true. Saudi Arabia’s government regularly attempts to restrict the type of content made available to its populace. The country scored a 25 out of 100 on Freedom House’s 2024 “Freedom on the Net” report, attributed to its strict controls over online activity and restrictive speech laws that saw a women’s rights advocate jailed for more than a decade.
But we should probably start explicitly framing American AI tools this way, too. In its support documents, OpenAI explicitly states that ChatGPT is “skewed towards Western views.” Hell, you can watch Elon Musk try to fine-tune the ideology of xAI’s Grok in real time as he responds to Twitter users who think the chatbot is too woke—an effort that, at one point, led to Grok referring to itself as “MechaHitler.”
There’s certainly a difference between corporate and government control (though, increasingly, it’s worth asking if there actually is that big of a difference), but earlier this year, the Trump administration set out plans to regulate the kinds of things large language models are allowed to output if the companies that make them want federal contracts. That includes requirements to “reject radical climate dogma” and be free from “ideological biases” like “diversity, equity, and inclusion.” It’s not force, but it is coercion—and given that OpenAI, Anthropic, and Google have all given their chatbots to the government for basically nothing, it seems like they are more than happy to be coerced.