This episode focuses on AI's evolving role in housing, discussing automation's potential to revolutionise customer service, its impact in the workplace, ethical concerns, and the regulatory challenges of AI deployment. We dig into these topics, and how AI can be used, with guests from the housing sector and those experienced in deploying AI-powered tools.
We are joined by Kate Stetsiuk, Head of AI at The Dot Collective, who will bring insights on AI's real world applications and the latest developments in responsible AI use.
Jon Cocker, Chief Information Officer at Platform Housing Group, is here to discuss AI's impact on data, governance, and operational efficiency and Gareth Lloyd, Chief Information and Transformation Officer at Stonewater, shares his perspective on AI in housing, regulatory challenges, and what the future holds.
Could AI replace employees? Can AI be trusted? What are we using it for? We navigate all this in episode 4.
AI (as Paula Palmer)
Hello and welcome to this episode of Stonewater's On The Air podcast. Today, we're exploring one of the most talked about innovations in the sector, artificial intelligence. AI is already transforming industries and housing is no exception. From customer service chatbots to predictive maintenance, AI has the potential to transform the way we manage homes and support residents.
With great potential comes great responsibility. How do we ensure AI enhances rather than replaces human roles? How do we balance innovation with ethics? How can we safeguard data security while embracing AI's benefits?
Paula Palmer
Wait, I'm going to stop that right there, I can't listen to any more. I don't know if anybody else noticed, but that intro was actually given by an AI version of me, your host Paula. It was created using Synthesia, an AI tool that we're using at Stonewater to create our content, and our first foray into all the uses of AI.
We'll come back to that later. First, let's introduce our guests with a bit more enthusiasm than AI Paula has to offer. We are very lucky to be joined by Kate Stetsiuk, who is Head of AI at The Dot Collective, who will bring insights on AI's real-world applications and the latest developments in responsible AI use.
We've also got Jon Cocker, who is Chief Information Officer at Platform Housing Group, who's here to discuss AI's impact on data, governance, and operational efficiency.
And our very own Chief Information and Transformation Officer, Gareth Lloyd, sharing his perspective on AI in housing, regulatory changes, and what the future holds.
I also hope that they'll share a little bit about how they're using AI in their own practices and workplaces. Welcome, everyone, and thanks for joining us.
Jon Cocker
Good morning.
Kate Stetsiuk
Good morning.
Gareth Lloyd
Morning.
Paula Palmer
Fantastic. Great to have you all with me. Thank you very much. So my first question is for Jon. Welcome to On the Air. AI is often seen as a tool for automation in admin and finance roles. Do you also see it reshaping housing roles and do you think it will replace them or enhance them?
Jon Cocker
It's a really interesting question. I think it's certainly going to change the world of work, not just in housing but across all sectors. Do I see it replacing roles? I see it enhancing roles, really. I think housing is currently bogged down in a lot of admin, a lot of back-office tasks that perhaps can be automated.
The fact is that housing is a sector that's about managing relationships with our customers, and those customers are among the most vulnerable people in society. I can see AI opening up space for housing officers, for people who are dealing with our customers, to manage that relationship in a much more effective way and actually deal with the qualitative things that are impacting our customers, rather than the minutiae that perhaps AI can do.
There are quite a few examples of that. A really good one I saw quite recently, which I really quite liked, and which was quite scary: there's a particular housing association in London that's using a large language model to draft their complaint letters before they're sent to customers. Really interestingly, the feedback they got from customers was that the AI-generated complaint response letters were more customer-friendly than the human-written versions. That was a really interesting case study, I thought. Obviously, they have a human check at the end of that to make sure the letter has the right content.
Paula Palmer
Fantastic. When we were researching this episode, all three of you spoke about AI being more of an enhancement rather than a replacement, which is reassuring for all of us with our jobs. Interesting what you say about that complaints model; that must be a really fine-tuned system they're using there.
Jon Cocker
Well, yeah, I think with all AI, you need to make sure the training is how you need it to be. It's a model, and I'm sure Kate will come in with a lot more expertise on that. A lot of these models are supervised models at the moment, and they need that level of training. They need that human interaction, someone saying, "Yeah, this is the response I want," or "No, this isn't the response I want." It does take a while to get to an outcome that's really relevant and really needed.
Paula Palmer
That's an interesting point you make about training the model. I didn't know that was a thing. Kate, can you expand on how AI can empower staff, particularly in those customer service and content creation roles rather than replacing them?
Kate Stetsiuk
Yeah, absolutely. I also agree that AI is a tool that enables our teams to work more efficiently rather than a replacement for them, because we can give AI many repetitive or time-consuming tasks, freeing up people to do work that's more meaningful, creative, and impactful. I think we should all think about AI as a powerful tool with many capabilities, classifying, predicting, analysing, summarising, and many more, and learn how to map these capabilities to the repetitive tasks in our business workflows. That's how we really optimise efficiency and improve outcomes.
For example, in customer service I see many AI applications, because we have many routine inquiries, like checking a balance, confirming rent payments, and many more. All of these can easily be handled by AI-powered chatbots, allowing human staff to focus on the more complex or sensitive interactions where empathy and creative problem-solving really matter. It's similar in content creation, because AI can help us generate ideas, outline articles, or even produce first drafts.
The human touch is still crucial, because humans shape the narrative, add personal insights, and ensure the message resonates with the audience. I also firmly believe that rather than replacing anyone, AI is empowering people.
Paula Palmer
Yeah, fantastic. I think it's going to have a really good place in research and things like that. A computer can much more quickly analyse lots of information and draw out the key points for people to make those decisions. You said there about enhancing, taking out the repetitive tasks, so that for us, when we've got so many calls coming in, it will free people up to help those who need more than the standardised service. Those people with vulnerabilities, as we call them, those who need more than your basic script on a call. That's fantastic.
Gareth, let's bring you in. From your perspective, what are the biggest risks of AI when it comes to jobs in housing, and how do we ensure it's used responsibly rather than disruptively?
Gareth Lloyd
I'll pick up on the word disruptively there, because I don't think disruptive innovation is necessarily a bad thing. It just means it's more transformative. If you go back to the original definition from the early '90s of disruptive innovation, it talks about faster, cheaper, simpler solutions. I really think that's how we should be looking at the opportunity here, how can we really speed up the way that we operate, so we get back to people faster, we get repairs done faster, we solve problems faster and cheaper for people than we're able to at the moment. I think if we look at it in those terms, there's a really massive opportunity for us in housing, using AI to drive business model transformation.
I think there will be lots of jobs impacted by this, but don't underestimate the human capacity for evolution and reinvention. If you go back to the wheel, machinery in the mills, the internet, and so on, we've coped, evolved, and improved our societies using those technologies. I think AI is very much the same. My message would be very much: embrace it, engage with it.
At Stonewater, we've got over 50 AI applications in use by our staff, some sanctioned, some unsanctioned. But I think it's really positive to see that level of engagement with the new technologies. And definitely, we should never underestimate human intelligence; there's still a need for it.
Paula Palmer
Fantastic. Thanks. And phew! Jon, we've already started to mention how AI can be an incredibly powerful tool in customer service. Going from Gareth's point there, it raises concerns about trust in these systems, making sure they're giving us facts, using them carefully, and still having that human input. How do we ensure AI-driven decision-making is transparent, and how do we explain that to customers?
Jon Cocker
Yeah. There are two words, which aren't my words, but the two to have in your mind are transparency and explainability. I'll pick up on transparency first. It's being very open with customers about where you are using AI, whether that's for decision-making or for analysis, and making sure you're updating the privacy policy on your website, so you've made it very clear what you're doing.
At Platform, we developed an ethical framework. It was specifically designed for our IoT work, but actually we've adapted that ethical framework for AI as well. The key component is going out to customers proactively and saying, in very simple language, "This is what we're going to be doing with AI, and this is how it's going to impact your data."
That leads me on to the next thing, which is explainability. A large amount of artificial intelligence, especially when you get it from a third-party supplier, is, to use a horrible term, a black-box solution. Essentially you put some data in, the system does a load of magic, and then it kicks something out. As a user of that system, you're not privy to what that black-box calculation looks like; you can't see what it's actually doing.
That can cause a problem sometimes, because if you are making decisions based on that, you need to be able to explain to the customer how it reached that decision, and a black-box solution makes that very difficult. I think there's an opportunity there for us to work with suppliers to get that explainability to customers, so we can be really clear about it. Interestingly, you may have noticed that the latest version of ChatGPT now has explainability built in: you click on your query, and it actually tells you how it reached that conclusion.
The other key component of that trust is having clear governance for artificial intelligence in your organisation. There are a couple of ways you can do that, and having an AI policy for your organisation, I think, is absolutely critical: understanding what people are using it for and how to use it.
As Gareth mentioned earlier, in development, when you're building houses, you have what are called desire paths. You lay some grass, and then you lay a path, and actually people will walk across the grass to get to the shortest point. It's the same with technology. There are so many ubiquitous AI tools out there, and people can access them. Whether you ban it or not in your organisation, people are going to use it. You've got to embrace that opportunity and allow people to use the technology, but make sure you've got very clear guidelines and guardrails that say this is how you use it responsibly, so we're not putting our customers' data at risk or impacting their privacy.
Then the next bit is around board assurance, and assurance generally. I'm a big advocate of third-party assurance. We're working on the gap analysis for ISO 42001, which is a very new AI governance standard with external audits. Because we look after some of the most vulnerable people in society, I'm really passionate that we are the bastions of those ethical and policy guidelines. I think that's really important for us.
Paula Palmer
Fantastic. Thanks, Jon. We're going to come back to the regulation, governance and education piece in a minute. I want to just touch back there, Kate. You've worked with lots of AI customer support systems that automate up to 70% of queries. Can you tell us a bit more about that, and what are the risks and opportunities in using that in customer interactions?
Kate Stetsiuk
Yeah, that's a great question, because there are many opportunities and many risks. In my experience, one of the biggest opportunities AI offers in customer service is efficiency, because we really can handle huge volumes of routine queries in seconds, and people can get quick answers even outside normal business hours. We can also process huge amounts of documents, automatically build reports, summarise things, et cetera.
That's also where the potential risks come in. If we rely too heavily on automation, we risk losing that personal connection with residents, for example. We have to make sure our AI system can seamlessly hand a conversation off to real people when someone's concern is more nuanced. That's really possible, because we can set up the system to check and classify queries, deciding whether a human needs to be connected to the issue or not, so it's possible to strike that balance.
Another risk is misinformation, because AI can sometimes get answers wrong or make assumptions it shouldn't. We need to set up regular monitoring and careful updates to the system and to the AI's knowledge base, and then we can mitigate and handle these risks.
I think it's all about balance, because AI really can handle high-volume, straightforward tasks, leaving our staff free to take on the more human-focused things. If you do it right, AI becomes our partner [inaudible 00:17:12] rather than a replacement, and delivers both efficiency and a personal touch. I really believe in balancing it.
Paula Palmer
That's great. I think we've all experienced getting stuck in an endless loop with a chatbot that keeps giving you the same answers, and you're like, "I know, but that's not my problem." Yeah, we need to remember that there needs to be a human touch in there.
Kate Stetsiuk
Yeah. I would add that I think the best thing for customer support is to have additional AI classifiers that recognise whether we need to connect a human now or not. That would be the best, because if it's just standard queries, simple questions from our database, AI can easily handle it, and residents, for example, don't need a real human connection there.
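The classifier step Kate describes can be sketched very simply. This is a hypothetical illustration only: the keywords, categories, and function name are my assumptions, and a production system would use a trained intent model rather than keyword matching:

```python
# Minimal sketch of a triage classifier: decide whether an incoming
# query needs a human or can stay with the chatbot.
# All keywords and categories here are illustrative, not a real product's rules.

ESCALATE = {"complaint", "unsafe", "leak", "mould", "eviction"}
SELF_SERVE = {"balance", "rent statement", "payment", "opening hours"}

def route_query(text: str) -> str:
    """Return 'human' for sensitive or unknown queries, 'bot' for routine ones."""
    lowered = text.lower()
    if any(word in lowered for word in ESCALATE):
        return "human"
    if any(phrase in lowered for phrase in SELF_SERVE):
        return "bot"
    # When unsure, err on the side of a human hand-off.
    return "human"

print(route_query("Can I check my rent balance?"))   # bot
print(route_query("There's a leak in my bathroom"))  # human
```

The key design choice, defaulting to a human when the classifier is unsure, is exactly the balance Kate argues for: automation for the routine, people for the nuanced.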
Paula Palmer
Yeah. You made a good point there, that it extends a business's hours, doesn't it? If you're just ringing up to get your rent statement or check if you can have permission for something standard, then that's really useful, isn't it? For everybody who's got, say, a 9-to-5 job, or it could be any hours, if it's not when somebody's at the end of a phone, then it's really useful.
Gareth, from a data security perspective, AI tools like ChatGPT, which you mentioned earlier, and Gemini, they're becoming more widely used. They also pose a confidentiality risk. What do we need to do to protect sensitive tenant data?
Gareth Lloyd
Well, there are a couple of aspects to that. The first is that the whole cybersecurity landscape is being really transformed by AI. It's so much cheaper and easier for really high-quality phishing and other types of attacks to be executed now. That's a quantum change for us, and something where you always need to keep up to date, keep everybody trained and aware. That's really important.
The data privacy side of this is probably the most interesting challenge. When we have staff, for example, using ChatGPT and posting maybe a work report into a tool to translate it into a different language, or asking it to produce a summary, or whatever it may be, that's a massive productivity benefit for us as an organisation. But people may not be appreciating the trade-off. There's an old adage online: if you're not paying for it, you're the product. In this case your data, the company's data, is the product. All of that data being loaded into these models effectively becomes the property of the provider.
There's a bigger thing that I don't think people quite get when we talk about training these models. These models have all been trained on everybody's data that was online over the past decades, and if you think about it, quite a lot of that is copyright data that's just been appropriated. What we're all now doing, using these tools so intensively, is extending whatever copyright confusion there may have been, and the appropriation of those data by these very large tech companies.
I think it's really incumbent on us to not stop and block, and prevent people using AI tools, but to make sure they're making informed decisions with proper policy guidelines. I really don't want to discourage people from innovating with new technology. I want them to use it safely. For me, it's that level of understanding about what we're putting in, how you anonymise data before you put it into one of these tools, and also the difference between the paid corporate tools that we have and some of the free tools that are out there. I think really there's a massive job for us to do on the data privacy side of this, both in terms of protecting commercial data and tenant data.
For example, we've used the example of responding to a customer complaint. If we were to do that using a free AI tool online and post that email thread, or whatever it is, into the tool, we're making those data available to whichever tech company runs it. It's just incumbent on us to do as much training and education and policy building as we need to, to make sure people make informed decisions.
Paula Palmer
That's actually really fascinating. I don't think I appreciated that the price you pay to use those free tools is your information; what you're putting in is training those systems and is there for other people. Definitely a good point about making sure there's education and awareness. Let's now talk about AI and IoT, the Internet of Things, and how it's being used in practice. Jon, I think Platform has been using AI for predictive maintenance: AI-powered smart sensors in homes that can improve energy efficiency and prevent issues like damp and mould. Where do we draw the line between innovation and respecting tenant privacy?
Jon Cocker
Once you start putting technology into people's homes, you're crossing a threshold, and it's the law of unintended consequences. You can put an IoT device in, and a lot of us across the sector have got smart thermostats that can measure humidity, measure when the boiler's turned on, temperature, and all that type of thing, which is all really useful information. But that information, and the metadata behind it, also tells you when the customer's home, and probably tells you how many people are in the property as well, if you start analysing that data.
It's very much a case of: because we can get that data, you have to have very clear ethical guidelines about what you're going to use that data for. I mentioned that ethical framework earlier on, and that's very much part of deploying a project such as IoT: you have that very clear transparency with customers. "We will collect this data, but this is the data we are going to use. We aren't going to use this other data. We're not going to use it for nefarious purposes; we're only going to use it for the purposes you agree to." Again, it's that transparency piece, where you have to be very, very clear about what you are going to use it for.
The other thing to say is that AI is one tool in the technical toolbox; it's not something that's going to replace everything. You've got traditional data science, you've got other methods. Part of that ethical discussion is: just because we can use AI, should we? Is it the best tool for that particular job? That's one of the considerations when you're deploying this type of technology across your customers' homes, because there's a very clear privacy threat for them, and obviously customer trust is such an important aspect for us. If they feel you are spying on them as a landlord, that trust will be eroded immediately, and that's not where anybody in the sector wants to be, I'm sure.
Paula Palmer
No, not at all. It's an interesting point, isn't it? We're going to have to wield our power a bit carefully. Then surely there are data protection points in there, and I think that's one of the AI Bill of Rights pillars, or whatever you want to call it.
Jon Cocker
Absolutely. Sorry, just before you continue, I think one of the key uses of AI in that environment is actually creating actionable outcomes. We probably all work with data coming up from IoT, and you get tonnes of data, but what does it actually mean? I think the power of AI on the end of that is being able to interpret it and give a clear action, whether that's a job for an engineer or a task for the housing officers. I see a real benefit in the future in being able to interpret this huge amount of data coming from automated sensors and distil it down into, "That boiler's not working quite to efficiency; go and do an engineering job." I think that's going to be a real benefit going forward.
Paula Palmer
Yeah, I think so, and it sounds like it's going to be really important, help with resources and financing, and make systems more proactive, isn't it? Kate, do you think housing providers are striking the right balance between innovation and ethics when it comes to AI and data collection?
Kate Stetsiuk
I think that today, especially today, it's not enough just to roll out an AI system and hope for the best. We really need to define how and why we are collecting data, who has access to it, and what limits are in place to protect residents' privacy. That's where having a solid ethical framework is crucial, because it keeps everyone accountable and ensures AI is used responsibly. To prevent data misuse and ethical risks, things like clear policies, regular audits, and transparent reporting are a must-have.
I think it's definitely possible to strike the right balance between innovation and ethics, but you need to follow established guidelines and work with professional teams. Ethics, audits, clear policies: it's not rocket science. It's possible, even easy, to implement if you know what to do and how to do it. That's why I think we really can strike the right balance.
Paula Palmer
Fantastic. I think we've made some good points there about using it ethically and being careful with data. Let's talk more about, perhaps the regulation of the sector. It's an increasing discussion around government intervention in AI regulation. Gareth, since AI is moving so fast, do you think the current regulations are enough or do you think they need to go further?
Gareth Lloyd
I think there's a huge conundrum for governments with AI regulation, because the main tech players in this space are bigger than the governments, and we're seeing that play out a little bit in the US, with Elon looking to dismantle parts of the US government and replace it with Grok and a couple of 19-year-olds. We've been through this kind of cycle before, where we end up with tech, or in the case I'm going to talk about, telco companies becoming too big. The US in the 1970s and '80s broke up the telcos because they were too big and becoming monopolies. We're at the point where things like that need to be considered again: these companies are becoming too big and too powerful and have too much data.
Now, that isn't going to happen in the short term, but I do anticipate people will start talking about it within 5-10 years. There's an even bigger challenge, because this is not a problem just in the West; it's a problem in the East too. If we regulate companies on our side of the pond, we put ourselves at a big disadvantage compared to where China will be. Regulation is a really thorny issue, and I don't think we're going to see effective regulation of the large tech companies.
That means it's even more important for us as individual companies, to make sure all our staff are aware of policies that we've got and that we do things in an effective way. My take on it is that the large scale of these providers is such that we need to take control ourselves, rather than relying on governments to do it for us.
Paula Palmer
We'll regulate ourselves and make sure we're using it honestly and responsibly. Fantastic. Jon, can you tell me more about AI in housing management, particularly in areas like silent tenant identification and service failure prediction? What do you think are the biggest opportunities for AI in housing in the next 5 years?
Jon Cocker
Obviously, there have been some big news stories over the last few years affecting housing, and one of them was the outcome of the Peabody report, where a tenant was unfortunately left in their property, deceased, for a number of years without being checked on. We saw that as a huge issue, and we wanted to make sure we reflected on it at Platform. We used artificial intelligence to build a model to predict the propensity for somebody to be silent.
We looked at a number of variables: whether they've contacted us via the contact centre, whether they've had a repair, whether their gas has been capped off, and so on, and built an AI model to give us a percentage prediction of that customer being silent. Then we automated that process with a voice partner. It gives the customer a ring, and they can press option 1, "Yeah, I'm absolutely fine, you don't need to contact me again," or option 2, "I'm here, but I do need some help." Then there's the option where they don't press anything at all, and at that point it triggers a tenancy welfare visit.
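To make the idea concrete, here is a hedged sketch of how signals like those Jon lists might combine into a percentage. The weights and time windows are invented purely for illustration; Platform's actual model is trained on their own data and is not public:

```python
# Illustrative 'silent tenant' propensity score combining the kinds of
# signals Jon lists. Weights and time windows are invented assumptions,
# not the real model, which would be learned from historical data.

def silence_score(days_since_contact: int, days_since_repair: int,
                  gas_capped: bool) -> float:
    """Return a 0-1 score; higher means more likely to be a silent tenant."""
    score = 0.0
    score += min(days_since_contact / 365, 1.0) * 0.4  # no contact in ~a year
    score += min(days_since_repair / 730, 1.0) * 0.4   # no repair in ~two years
    score += 0.2 if gas_capped else 0.0                # gas supply capped off
    return round(score, 2)

print(f"{silence_score(400, 800, True):.0%}")  # 100%
print(f"{silence_score(30, 60, False):.0%}")   # 7%
```

A score above some threshold could then trigger the automated call Jon describes, with no response at all escalating to a welfare visit.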
What we're trying to do is understand that across housing, and other speakers have said it around, it's that prioritisation of tasks, because everybody's so busy that artificial intelligence gives us the opportunity to prioritise tasks and prioritise customers who really need that relationship management piece.
The other bit is service failure. I'm sure Platform is not alone: looking at the Ombudsman's statistics, complaints across the sector have exploded over the last couple of years. There's advertising on social media, you see billboards about it as well, and the number of complaints has gone up. A model we're looking at now predicts service failures. It sits before the complaints process: it looks at whether a customer is ringing us multiple times, whether a customer's got multiple repairs, and then puts an intervention into the business and gets in touch with that customer before it builds into an actual complaint. Again, it's changing our face from a reactive organisation to a proactive one.
The stuff I get really excited about, and I'm sure other people are with me, is the data side of it. That's where my geek really kicks in. There's a pilot we're doing at the moment which I'm convinced is the future, where we've got a subset of our data that we've now put behind an LLM, a large language model. What that allows us to do is ask natural-language questions against our data and get insights across it.
The reason I'm so excited about that is we've all got reporting teams in our businesses, and business intelligence is such a hot commodity that people are constantly coming to them with requests. If you can deploy that across your organisation with a user interface on your intranet, you can ask any question of your data, with security, obviously, making sure people can't ask for things they're not allowed to access. Being able to democratise your data by asking natural-language questions to interrogate it all, I think that's the future. I think that's where we'll be going. I'm really passionate about that side as well.
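The pattern Jon describes, natural-language questions answered against internal data with a security layer in front, might look like this in miniature. The LLM is stubbed out with a canned question-to-SQL mapping, and the table name and permission model are assumptions for illustration; a real deployment would have a model generate the SQL and run it against a governed warehouse:

```python
# Sketch of natural-language querying over internal data with a permission
# check. The 'LLM' here is a canned mapping, and the data lives in an
# in-memory SQLite database, purely to show the shape of the flow.
import sqlite3

CANNED_SQL = {  # stand-in for LLM-generated SQL
    "how many open repairs?": (
        "repairs", "SELECT COUNT(*) FROM repairs WHERE status = 'open'"),
}

def ask(question: str, user_tables: set[str], conn: sqlite3.Connection) -> int:
    table, sql = CANNED_SQL[question.lower()]
    if table not in user_tables:  # the security layer Jon mentions
        raise PermissionError(f"No access to table '{table}'")
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repairs (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO repairs VALUES (?, ?)",
                 [(1, "open"), (2, "closed"), (3, "open")])
print(ask("How many open repairs?", {"repairs"}, conn))  # 2
```

The important design point is that the permission check happens on the generated query's target, not on the user's wording, so rephrasing a question can't bypass access control.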
Paula Palmer
That does sound really exciting, Jon, and really useful.
Jon Cocker
That's the thing with artificial intelligence: it's got to be useful. There's got to be a business problem it's solving. It can't be technology for technology's sake, and that's a key thing about it. You should treat it like any other project: there's a business problem to solve, and AI might be the tool in our toolkit to solve it. It might not, but it gives us another avenue to explore.
Kate Stetsiuk
These question-answering systems are my favourite AI application right now, because they really give people the opportunity to search across thousands of documents instantly, extract the information they need, and resolve queries in seconds. That's the real power of AI, and it's definitely something companies should be applying today.
Paula Palmer
Brilliant. Gareth, I'm going to give you an opportunity here, and feel free not to answer this one, because I haven't prepped you for it. But do you want to add something about Stonewater: some work we're doing, any AI opportunities coming our way, or tools we're currently using?
Gareth Lloyd
At Stonewater, it's been a fairly organic process so far, and I really like that. I really like it when we have champions around the organisation who start to push something like Synthesia as an L&D tool, driving productivity in our L&D function, and pick it up and run with it. I'm always happy to back people doing that. It means you've got enthusiasts, advocates, champions; it means it's very much them leading and technology supporting. I think that's such a positive way to see AI being used. There are lots of other use cases like that.
One of the people in the tech team is one of the key supporters of our IoT programme. I think it's really positive to see those kinds of local initiatives growing and being developed. At Stonewater, that's been the approach so far. We will move in a more structured way, but I'm really keen on the organic approach.
Paula Palmer
Fantastic. Thanks. Jon mentioned AI democratisation. I did manage to say that right! The idea is that AI is now more accessible than ever. Kate, can you tell us how housing providers can take advantage of this, and what advice would you give to organisations looking to experiment with AI?
Kate Stetsiuk
I think AI democratisation means AI is no longer for tech giants only; housing providers can benefit from it today, and the opportunities are really huge. I'll divide it into a few categories. First is automation: AI can take over repetitive tasks like lease processing, responding to inquiries, handling maintenance requests, and many others. We can ask ourselves, "Which workflows can we automate? Which repetitive tasks do we have?"
That means less administrative overhead for us and faster responses for our tenants. There's also the opportunity to offer virtual assistants, like AI-powered chatbots that provide 24/7 support, help teams and improve overall processes across the company. Define what is possible to automate, and there are plenty of AI tools available to do it today.
Predictive maintenance is another game changer. Instead of reacting to broken equipment, we can analyse our historical data to predict when something like the plumbing might fail and fix it proactively. That helps prevent costly emergency repairs and keeps buildings running smoothly.
AI can also analyse documents very effectively. We can scan financial transactions and documentation, flagging potential risks fully automatically and helping housing providers, for example, stay compliant. And AI can give us a huge amount of data insight: it can analyse trends, rates and maintenance costs and help providers make smarter, data-driven decisions. I've been working in AI for around 10 years, and I can say with full confidence that AI is a practical tool, and it really can improve efficiency, reduce costs and enhance the overall experience.
As for advice for companies on how to start and how to apply it effectively, I would say the first thing is AI literacy. As Gareth said very well, if a company has AI champions, leaders and advocates, they can bring different initiatives to the table. We as a company can help educate our people about it, even with simple workshops, training or courses, just to give people an understanding of the AI opportunities out there, because without that understanding, people don't know what is possible. So first of all, it's AI literacy.
I also think it's very important for a company that hasn't applied AI yet to start small. The best way is to identify a single pain point and find an AI solution for it. It can be one small process, one small improvement, but it lets the team gain some momentum with AI, so they understand, "Okay, AI really solved that for us. What else can AI do?" And people become more engaged. So start small, start with a proof of concept or proof of value, and then iterate.
The last thing is to focus on return on investment, because right now it's very popular to apply AI just for fun, just because everyone else is doing it. We shouldn't forget about return on investment. We should weigh the cost of developing an AI system against the improvements it will give us, and on that basis make decisions and prioritise our AI initiatives. That's the main advice I can give.
Paula Palmer
Fantastic. Thanks, Kate. Those are actually all really useful points. I could see both our guests nodding away there, so lots of agreement there. Gareth, looking ahead, what are the biggest risks housing providers need to prepare for as AI becomes more integrated into our housing services?
Gareth Lloyd
I think the first and biggest, and maybe quite boring, one is data quality. If we don't have the data quality right in our organisation, that foundational building block, there's no point just putting AI on top of it. All you get then is poor decisions. You get artificial stupidity rather than artificial intelligence. I think that's a really important thing for us as an organisation, and as a sector, to understand and to deal with. Data quality has always mattered, but if we're going to leverage these new technologies, it matters more than ever now. I think that's a really key point.
We've talked a little bit about what AI can and can't do. One of the things it definitely does is hallucinate. It tries to fill in. AI works on the basis of patterns that it's learned from what's been uploaded to it. It will often fill in gaps, and I'll give you a live example of that. I asked it to write a job description for me, uploaded some information, and it added in lots of benefits that we don't happen to offer at Stonewater.
I think that's where human-centric AI policies come in, where we make sure we always have scrutiny over the outputs. We're happy to accept that it will hallucinate, that it will have inaccuracies, and if it gets us 80 or 90% of the way there, that's fantastic. But always have that oversight at the end. I think there is a real paradox around this idea of explainability. The way AI works, using neural networks behind the scenes, is designed to mimic the human brain. It's nodes connecting to one another behind the scenes. That's how it learns and develops.
The human brain obviously works in a similar way. We have two levels of thinking: conscious, slow thinking, which is more rational and logical, and subconscious thinking, which is much more instinctive. The instinctive thinking is really fast and cheap to do. You get an instinctive response to something, but it's often quite a poor decision.
What we try and do as members of society is allow our conscious thoughts, which are more expensive to process, to control what our more base level of thinking does. One of the big issues in AI at the moment is all the thinking is being done in the logical, rational, expensive way. As it develops and grows, it's inevitably going to look at its own energy consumption and move to more subconscious thinking, which is often poorer decision-making that is less transparent.
I think there's a really tricky inflection point as AI develops and grows and learns, where it becomes less explainable to itself. It won't necessarily understand how it's taking those decisions. This is a computer programme that is designed to develop in the same way the human brain works. Hence all the conversation about AI energy usage being so high: it's very, very expensive to process complex AI queries, and there's a huge need for more data centres, more efficient energy consumption, et cetera.
This is why DeepSeek was such an interesting blip on the horizon. It's fairly inevitable that people are going to be looking at ways to reduce the cost of processing AI responses. Logically, if you look at the human brain, that means more of those decisions being the kind of instinctive, poorer decisions that we as humans are trained to screen out.
Paula Palmer
But I think it circles back to our first point, when we started talking about whether AI is going to replace people, and clearly it's not. We still need that sense checking, making sure that AI is making good decisions, which is a fantastic place to stop. We have covered so much today, from the opportunities AI brings to housing to the challenges we're going to have to address in making sure it's used responsibly. But one more question before we wrap up. Looking ahead, what's the one thing housing providers should focus on when adopting AI? Let's start with you, Kate.
Kate Stetsiuk
I think companies should now realise that AI is with us forever. It has already transformed, and will continue to transform, how we work. The one thing to focus on is education. Educate yourself and your teams about AI opportunities, capabilities and limitations, and start now. Implement AI in at least some use cases, work with it, experiment with it, because we will all end up doing it anyway. If we start earlier, it will give us more competitive advantage and more impact on our companies and processes.
Paula Palmer
Fantastic. Thanks, Kate. How about you, Jon?
Jon Cocker
Yeah, I'm going to build on Kate's point there a little bit, because I completely agree. I'm going to go into an HR thing here: building AI literacy and capability into your workforce strategy and your workforce planning, because the jobs of the future are going to be very different when the knowledge is available elsewhere. I think the roles are going to shift more towards how we manage relationships and things like that. A curious mindset is going to be a key attribute of the staff member of the future. That's when you start redoing your job descriptions to encourage that type of person into your organisation, someone who is going to utilise these tools and be keen to play with them. So I'm going to go down the HR route as well, Kate, and have that people element of AI built into your organisation.
Paula Palmer
Thanks, Jon. Last but not least, Gareth, what do you think?
Gareth Lloyd
It's very similar. It's like that embrace and engage. There's no point sticking our heads in the sand. This is a massive opportunity for us as a sector. The more we can adopt those opportunities, the better.
Paula Palmer
Fantastic. Thanks. We're going with educate, embrace, engage. Lots of words there. Thank you to all of our guests for sharing your knowledge, your thoughts and your insight. AI, as Kate says, is undoubtedly here to stay, and as we've heard today, its success in housing depends on how responsibly we use it. That's all for today's episode of On the Air. Thank you to our guests and listeners for joining us. We'll be back soon with more conversations on the future of social housing.