By Mahananda Bohidar
Can you tell me a bit about how Google DeepMind’s work with ARMMAN began?
Google Research India, now Google DeepMind, was established in 2019. My role with the initiative began with a focus on AI for social good in India. By chance, through some connections, I got to meet Dr Aparna Hegde, who shared a pressing issue their non-profit, ARMMAN, had been working on: 40 per cent of the mothers were dropping out of their maternity health programme. The key challenge was to reduce this dropout rate with AI’s help. I had just started my role at Google, so it made sense for us to take on this problem and figure out a way.
You mentioned dropouts… How did you train AI to predict who would drop out?
In terms of predicting who was at risk of dropping out, we had access to past beneficiary data. Each of these mothers had associated features such as age, income, education, language spoken, etc., and we also had call records: for instance, whether a mother had been listening to the messages and then transitioned into not listening. So, we had records of how they behaved in response to these interventions.
Based on all this data, we could build a model. For example, if a mother is 26 years old, studied up to 8th grade, and has an income of ₹5,000, the model might predict — and I’m just making this up here — that if she is listening today, the probability that she continues to listen might be, say, 0.1 per cent. So when a new mother walks in, we can make similar predictions about her behaviour and the risk of dropping out. We can rank them based on that risk and tell ARMMAN, “These are the mothers you should intervene with right now, to retain them.”
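In very simplified form, the risk-ranking idea described here can be sketched as follows. Everything in this sketch is invented for illustration: the features, the weights, and the use of a hand-set logistic model are assumptions, not ARMMAN's or Google DeepMind's actual system.

```python
import math

def predict_continue_probability(features, weights, bias):
    """Toy logistic model: probability that a mother keeps listening.
    The weights and bias are illustrative placeholders, not learned values."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def rank_by_dropout_risk(mothers, weights, bias):
    """Return mother IDs ordered highest dropout risk first,
    i.e. lowest predicted probability of continuing to listen."""
    scored = [(mid, predict_continue_probability(f, weights, bias))
              for mid, f in mothers.items()]
    return [mid for mid, _p in sorted(scored, key=lambda t: t[1])]

# Hypothetical features, scaled: (age/10, years of schooling/10, income in ₹10,000s)
mothers = {
    "A": (2.6, 0.8, 0.5),
    "B": (3.1, 1.2, 1.2),
    "C": (2.2, 0.5, 0.3),
}
weights, bias = (0.3, 0.8, 0.6), -1.5  # placeholder parameters

# Mothers at the top of this list would be flagged for intervention first.
print(rank_by_dropout_risk(mothers, weights, bias))  # ['C', 'A', 'B']
```

In a real system the parameters would be learned from the past beneficiary data the interview mentions, and the top of the ranked list would be handed to ARMMAN's call-centre staff for follow-up.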
What was happening to the women who were dropping out? How did you use AI to solve that issue?
It was obviously a loss to the programme that these mothers were dropping out, and they and their child were not benefiting from all the health information. If you compare mothers who listened to all their messages to those who didn’t, there have been significant benefits to the ones who did. To highlight a few, data show that there’s been a 30 per cent increase in infants tripling their birth weight in a year and more follow-up visits to the doctor after the baby’s birth. And these women were potentially missing out on all of that simply because they weren’t listening to the health messages.
So are these messages usually reminders for vaccines, check-ups, the mother’s health, the child’s health?
The messages that I have been exposed to are usually short, two-minute snippets. They say things like: you need to improve your iron intake, eat green leafy vegetables, increase your calcium intake, get the baby vaccinated, and so on. These messages are delivered once a week, and they’re in the local language.
What are the results you’ve seen through the programme so far?
We’re very happy that we’ve cut the dropout rate by 30 per cent, which has led to an improvement in the health behaviours of mothers. It’s just really gratifying that this partnership has taken AI systems that are often theoretical in the lab and moved them into real-world deployment, where they’re genuinely benefiting women at the scale of hundreds of thousands.
Were there any surprising insights you guys stumbled into, as part of this process?
One of the good surprises that ARMMAN pointed out to us was that, although the model was built to identify mothers at high risk of dropping out — not those at high risk of complications during pregnancy — it somehow ended up doing both. What ARMMAN told us is that the model is also identifying women who are facing more difficulties during pregnancy, and in some extreme cases, even miscarriages. So they were then able to connect those mothers to counselling services or offer them additional support. That was a very surprising and encouraging outcome, especially since the model wasn’t even designed to detect that. It was great news.
So, could you tell me a bit about how you’re collaborating with Kilkari as well? Is it something new or are you replicating the same model that you did with ARMMAN?
There’s another new model, and again, we’re working with ARMMAN, and ARMMAN in turn is working with Kilkari, a government programme. There’s a big difference in the kind of resources available. For instance, in Mumbai, two lakh women were registered at the time, and a call centre there could make about a thousand calls a day. But when you’re dealing with millions of mothers registered across India, with all the linguistic and regional diversity, it gets complicated. There isn’t one centralised call centre that can handle all those calls. So instead, they’ve shifted to a different approach: How can we predict the best time to send the health message so that mothers are more likely to listen? Our work has led to improvements in pickup rates across seven different time slots throughout the day.
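The time-slot idea can be sketched very roughly as picking, for each mother, the slot with the best estimated pickup rate. The seven daily slots come from the interview; the slot names, the per-mother pickup history, and the smoothing choice are all invented for illustration and are not the actual Kilkari model.

```python
# Hypothetical slot labels; the interview only says there are seven slots per day.
SLOTS = ["8am", "10am", "12pm", "2pm", "4pm", "6pm", "8pm"]

def best_slot(pickup_history):
    """pickup_history: {slot: (picked_up, attempted)} for one mother.
    Returns the slot with the highest smoothed empirical pickup rate.
    Laplace smoothing keeps slots with very few attempts from dominating."""
    def rate(stats):
        picked, attempted = stats
        return (picked + 1) / (attempted + 2)
    return max(pickup_history, key=lambda s: rate(pickup_history[s]))

# Invented call history for one mother: (calls picked up, calls attempted)
history = {"8am": (1, 5), "12pm": (4, 6), "6pm": (3, 4)}
print(best_slot(history))  # '6pm': smoothed rate 4/6 beats 5/8 for 12pm
```

A deployed version would presumably learn pickup patterns across millions of mothers rather than per-mother counts, but the core decision, schedule the call where the predicted pickup probability is highest, is the same.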
In a country like India, maternal health is shaped by factors like literacy, family influence, and socio-economic background. How easy or difficult was it to account for these while problem-solving with AI and tech?
I think all of the diversity that you’ve pointed out is certainly relevant. So, the programme does try to tailor the messages to the language of the person based on their region and context.
What we’ve also noticed is that as listenership improves, there’s also an increase in family involvement. Some of the surveys ask questions like, “Is your husband listening to these messages?” or “Are you sharing this information with your family?” And we’re seeing more positive responses to those questions. So, there’s a positive chain reaction, if you will, where it’s not just the mother but also the family getting involved.
You’ve long worked at the intersection of tech and communities, even as tech has evolved rapidly over the last couple of decades. What has surprised you about how tech intersects with community now?
First of all, people now know what AI is, which wasn’t always the case. Back in 2005 or 2006, when we started talking about AI, the reaction was often, “What are you talking about?” It was still a very new and basic concept for many.
I think there’s much more acceptance now that AI is a powerful technology. As a result, more agencies recognise that they could find it useful, so it’s easier to have those conversations today.
Simultaneously, people have also heard about the potential negative consequences of AI. But generally speaking, it’s much easier to have these conversations now. There’s been a shift: people approach me to collaborate, as opposed to earlier, when I had to push and say I could be of assistance. That’s really helpful in our goal of assisting communities through AI.