IEEE AI Coalition Podcast: Episode 1

A Conversation with Daswin De Silva and Christine Miyachi

Part of the IEEE AI Coalition Podcast Series

 

Episode Transcript:

Hello, welcome to the IEEE AI podcast, brought to you by the IEEE AI Coalition and IEEE Future Directions. I’m your host, Daswin De Silva, and with me as co-host is Chris Miyachi, chair of the IEEE AI Coalition. Hi, Chris. How are you doing? Good. I’m happy to be here, and I am very proud to be part of this inaugural episode of the IEEE AI podcast. And since it’s our first episode, let’s introduce ourselves. My name is Chris Miyachi. I am a senior software engineering manager at Microsoft, where I work in the cloud and AI division, and under that, in health and life sciences. Daswin, what about you? Yeah, thanks, Chris. I might also add that you are a fitness professional and a certified yoga instructor. Excellent. Yes. Well, actually, mindfulness and consciousness might become intersecting themes in future episodes as we start to talk about the quest for AGI. I would love to hear more about it. Exactly. Yes. So, me, I’m a professor of AI and analytics at La Trobe University in Melbourne, Australia, only just 10,000 miles from where you are, Chris. Hey, you should know that I’m running the Sydney Marathon in August, so maybe we can meet up in real life. Yeah, definitely. Yeah, I’ll lock that in for August. That’s good. So, yeah, my research and teaching expertise is in AI, rather diverse algorithms, systems, ethics, and I’ve received a few awards for both teaching and research, and I’ve taught AI micro-credentials to industry, which keeps my knowledge and experience current, and also supervised PhD topics on diverse areas in AI. Okay, so since we are on the topic of introductions, maybe a slight cliche, but I think we’d have to introduce AI, also given it’s our first episode. So, when I speak to students and industry partners, I pick on three definitions for AI.

One is from the very first AI workshop in 1956, where it was a proposal for a summer research program at Dartmouth College, and this is where the term AI was first introduced, and those experts in their fields described AI as, I’m going to quote here, so how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and also improve themselves in these activities. I quite like this description, because it wasn’t realized in 1956 or the 70 years afterwards, but this vision for AI continues to date. And then, if listeners are interested, a more recent definition of AI is from the Organization for Economic Cooperation and Development, the OECD, which is basically a consortium of the first-world countries, and they released an updated definition of AI following things like ChatGPT. This was in the middle of 2023, and there was a significant backlash against the first version of their definition, mainly because individuals on social media or people in focus groups did not experience AI in the manner that it described. So, this is also relevant to all of us, that AI is an evolving field, it is an evolving technology, and this means the definition also tends to evolve. So, what we had as AI in the last 10 years is quite different to what we have today, and also what we might have in the next 10 years. And the third one, finally, is quite simple. This is from the CEO of Microsoft AI, Mustafa Suleyman, who describes AI to his six-year-old nephew as a new digital species. And I think listeners would probably relate to this in terms of if they’re using Copilot, for example, which is a digital companion. So, an extension of this is a digital species at large, if you think of all the different applications where AI might be used. So, yeah, I’m going to leave it there, but we can also pick up on different types of introductions in later episodes. Thanks, Daswin. That’s so interesting. 
I particularly like the original 1956 Dartmouth definition. A lot of the developments in AI have been really in the news in the last five years, but it reminds us that AI started in 1956, in that meeting at Dartmouth. It was a small conference where the definition of AI was created, and it’s been that long, and the research has been going on consistently since then. But the purpose of the podcast will be to look at recent developments in AI: gen AI models, machine learning algorithms, AI applications. We’ll also look at AI ethics, which is a big focus of the IEEE, at regulation, and just at some headlines that are happening in AI locally and across the globe. Yeah, exactly, Chris. That’s right. So, I’d like to touch on the IEEE. As the largest technical professional organization in the world, the IEEE is foundational for advancing technology for humanity. And in this case, it’s advancing AI for humanity. So, we want our listeners to be responsible and ethical consumers, builders, operators, and regulators of AI. And this is what we aim to achieve with this podcast.

Yes, and I want to give the listeners some background on the IEEE. We just reached 500,000 members, which is a big milestone for us. This podcast is part of Future Directions, as is the AI Coalition. The AI Coalition spans all the societies and councils across IEEE, and it brings together all of the AI content that’s happening across IEEE. So, there is a lot going on, as you can imagine, in the area of AI. And the purpose of Future Directions is to bring new technologies into IEEE. So, on that note, we’re excited to have this podcast because I think that’s going to help. Yeah, of course. And also the fusion between AI and other emerging and new technologies, and the sort of embedding or application integration of AI into these technologies. So, for example, whether it’s blockchain and AI or the metaverse and AI, AI has this capacity to augment a pre-existing or an emergent technology. And that’s really something worthwhile pursuing, which is not something individual organizations or even universities have the time and energy to focus on. But given the large nature of the IEEE and the diversity of experts within the IEEE, I think we have a lot of potential there. For sure. Yeah, that’s right. So, as this podcast continues to evolve, I believe we will have guests from the different IEEE stakeholder groups that make up the coalition. So, this means our listeners will certainly hear from this broad cross-section of the IEEE AI efforts and also the broader AI community. Great. Yeah. So, given that there have been a few headline pieces recently, I’d like to unpack some of these, especially from just the first few weeks of 2025. We’re just into March, but there have been at least three big releases that are indicators for what’s going to come up in the rest of the year, and also how applications, how we use AI, is likely to change or improve. 
So, the most recent one is what’s been slightly hyped up as the Super Bowl of AI, the NVIDIA GTC, or GPU Technology Conference, which was held in San Jose just last week. This conference has been around since 2009, but it used to focus on semiconductors and supercomputing, and now it has transitioned into AI, robotics, self-driving vehicles, and even quantum computing. So, just as a bit of a Super Bowl act, NVIDIA CEO Jensen Huang delivered the keynote address, where he showcased the Blackwell GPUs, the next generation of their GPUs. The graphics processing unit is fundamental to how we build AI.

So, Blackwell was announced last year, but now it’s ready to be shipped to cloud service providers, and he showed some fairly striking contrasting figures. They sold 1.3 million Hopper GPUs, the previous generation, in 2024, and up to March 2025, just in three months, they have orders for 3.6 million Blackwell GPUs. So, Chris, this shows the significant appetite and demand for computing. Yes, that is amazing already. I wonder what it will get to in 2025 if they’ve already got that many orders, yeah. Yeah, and this sort of leads up to the second reveal, which is what he called AI factories. A factory brings a different perception to mind, but let’s go with it for now. So, he claims that the enterprise, the manufacturer, or any large organization really, will need two types of factories: a production factory and an AI factory. The AI factory will provide intelligence to the production factory. I think this suggests many sort of open-ended, blue-skies use cases, things like deep hyper-personalization, significant attention to detail in terms of designing a product, and then having design embedded as part of the manufacturing process. So, I doubt many organizations at present are thinking of these use cases, but it’s quite exciting that the leading technology companies are preparing for this sort of evolution of the processes and functions of standard organizations. So, yeah, lots happening in that space as well. The third takeaway for me was that he presented an AI timeline. This is important for many reasons. I’m from academia, and we need to make sure students are future-ready, that they graduate into future jobs, and the future of jobs itself is evolving. 
So, having these timelines helps us to predict, to some extent, with a lot of caveats, what’s going to happen three years later when they graduate, so that we can, at some level, adjust the curriculum and the focus of their studies to suit such jobs within the three-year timeline, or even six years, looking into the future. So, he starts with the emergence of deep learning with AlexNet in 2012, and this is followed by things like perception AI, so speech recognition and medical image analysis, off the back of deep learning, mostly. And then, since 2022, it’s generative AI, so ChatGPT and a whole suite of models. And from this year onwards, it’s agentic AI, so agents are going to be our digital companions, which is a step up from chatbots. Compared to chatbots with generative AI, agents have a bit more autonomy to address diverse needs in different tasks, but also to make some level of decisions. We’re yet to see a fully-fledged agentic AI framework in practice, in action. We see some of this in coding, and we’ll get to that shortly when Chris talks about her work environment.

But also, the next phase is physical AI, where the AI takes a physical form, such as humanoid robots and self-driving vehicles. DeepMind and OpenAI have also released similar timelines, so listeners, if you are keen, these are some high-level predictions for what’s yet to come. And this is obviously based on lots of research, and the experts themselves are providing these timelines. So, it’s not set in stone, but it does give us an indication of the next iterations. So, yeah, a couple of other announcements. In February, we had OpenAI release GPT-4.5. And I think the main takeaway there for me was that 4.5 was not a reasoning model. And this sort of indicates that there are probably two directions the companies and organizations building AI models are heading towards. One is scaling up on compute for pre-training, and the other is scaling up on compute for inference. The best example of scaling up on inference is DeepSeek, which was released in January 2025. And DeepSeek was able to surpass OpenAI’s reasoning models on several reasoning benchmarks. And more importantly, DeepSeek open-sourced the entire model; the code and the weights are free for both commercial and personal use, which is a significant departure from the practice adhered to by OpenAI and a few other companies. So, I think there are going to be lots of new announcements this year, but these three big releases give us an indication of what’s to come towards the latter part of this year. Chris, your thoughts? Yeah, I was interested in what your thoughts were on OpenAI, the models being closed, versus what DeepSeek is doing and how that’s going to change things. So, firstly, we’ve always spoken about this democratization of AI. And DeepSeek is the first sort of realization of this, which is really beneficial for the sector. It’s not just about building better models, but also about reducing the suspicion, the doubt cast over AI. So, I think there are benefits across the board. 
And this is something that the other organizations should also take seriously, because through their preferred approaches to revenue generation, a side effect has been that the social element has been disconnected: the mainstream, the individuals who are consumers and who, in some ways, provide the data to train these models, have been disconnected from the progress and the benefits of AI. So, open source is probably a first step towards addressing this huge gap. Interesting. Thank you for that commentary. I wanted to comment.

I mean, I love the way you’re running through everything you heard last week from this conference and in the news. And I want to talk briefly about how AI is impacting my work as a software engineer and a software engineering manager. So, at Microsoft, we have Copilot and GitHub Copilot, and just in the last year, we are heavily using them. In fact, our managers measure how much we’re using it, and they want to see that we’re using it every single day, because it improves accuracy and helps us move faster. So, I’ll give you an example of how I use it. I often take smaller bug tickets away from my engineers so that they can stay completely focused on really important, difficult work, because they need to not be interrupted. So, I’ll take these smaller bugs and, rather than bothering them about it, I’ll use GitHub Copilot to figure out how to solve it. And GitHub Copilot will actually modify the code for me if I ask it to. So, it is amazing in that way, and it keeps getting better. And all of us also use it for design. Every day at work, we’re in such a fast-moving environment that there are new terms, new topics that I don’t know, and I’ll Copilot it, and I’ll get all the information I need. And it’s done in a language-friendly way. It’s not just like a search engine. And it also shows me the sources if I want more information. And, you know, particularly my two groups at work, I manage two teams that are extremely mature teams, very high-level engineers, very experienced, and we travel around to different projects. So we might be on a project for anywhere from three months to a year, and we have to get up to speed very quickly, often with new languages that we haven’t used before. And we use Copilot all the time for that. We understand software, and we usually have about three to five languages that we’re pretty good at, usually one or two that we’re expert in. 
And then when we pick up another one, we just need a boost. We don’t need to take a class on it. We need just to know some information about it. So, you know, we use it on a daily basis.

It is helping us quite a bit, and I just see it helping us more and more. I don’t see it replacing our jobs just yet. Yeah. But I have used it to actually write small applications. I’ve actually said, can you generate an app that does this, you know, something simple, and it will do it. And that really gets me started on what I need to do. So, yes, it’s extremely powerful, and I can see it just doing more and more for us. Yeah, well, that’s wonderful to hear. So I think it’s the collection of these activities or subtasks from problem solving, so, you know, Copilot-led solutions, which have close human supervision. And also the transferability from one programming language to another, which is really the programming expertise traveling from one language to another with the support of this digital companion. Exactly, yeah. You know where else it’s really, really helpful? Infrastructure work, so deployments to Azure, with either Terraform or Bicep. It’s very technical, with a lot of tacit knowledge. You can give it a snippet of code and say, this is happening, and here’s the error, and it will help you. I’m doing something like that now, where I’m deploying something for the first time, something really big and complex, and I’m getting a lot of errors, and it’s helping me at each stage of the way. It’s almost like having an expert right there with me. So, yeah, it really is an amazing technology. And just within a year, Daswin, that’s how soon it’s been. Yeah, that’s really the impressive part, how quickly it can be transformative in different organizational contexts. That’s quite exciting. One other aspect which we tend to forget is the learning or the tutorial value of using these models, because you sort of shortcut the knowledge acquisition process within an organizational setting.

Exactly. So, we find new things fairly quickly, and they’re immediately relevant to the work task we are currently engaged in. That’s a significant uplift in productivity, because if you think of the alternative, you’d be searching on search engines, looking for textbooks or points of reference, or scrolling through API documentation, which takes a couple of hours at least to find the exact information you need. But in this case, we have an expert with a lot of knowledge from different types of code examples ready to go, right at your fingertips. Exactly. Yeah. That’s great. So, what you described, Chris, is really useful, and it’s a good illustration of the evaluation of language models, or the large-scale evaluation of generative AI. There’s even a large language model leaderboard on Hugging Face. So, there are evaluation metrics and benchmarks, which are different datasets created with a different focus in mind: exam questions, Q&A, chat, conversational-type questions. But I think the best evaluation metric is this level of application integration in an actual workplace, where the staff, the employees, actually see the change in their work routines, the productivity uplift, and a true impact on how they work. And this is why certain groups, certain researchers, in the early days of 2023 started calling gen AI a general-purpose technology, which has this transformative effect across the economy. Every sector, every discipline is transformed or impacted through the use of AI. So, this can go both ways, negative and positive, with things like technological displacement also in the conversation. But on the productivity uplift, there’s that paper from 2023, co-authored by researchers from OpenAI, slightly biased, but also the University of Pennsylvania, and the title was GPTs are GPTs. 
So, the second GPT is a reference to general-purpose technologies. What they did, aside from the methodologies, is they started looking at occupations in a US database, the Occupational Information Network, or O*NET. And this has occupations broken down into subtasks. And they used a combination of human expertise and, slightly biased again, GPT-4 classification of these activities as a simple binary: can an AI do this kind of task, or not? And then they aggregated all this information for thousands and thousands of occupation types. And their summary statement was that there is likely to be a productivity uplift of about 10% for about 80% of the workers in the economy. And if you think of this 10%, it’s approximately an hour saved within the workday. Your example, Chris, and also some of the examples from my university, where we have Copilot licenses provided to lots of staff within the organization, suggest that what they theorized in the early days is actually correct. So, there is this uplift of about one hour saved from the workday. So, this is the first stage, really. And now, some of the opponents are asking, is it only good for writing emails? But this is sort of the big question asked of the AI companies and also the AI builders.
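For listeners who want a feel for the task-level exercise described here, the aggregation logic can be sketched in a few lines of Python. This is a purely illustrative sketch: the occupations, worker counts, and AI-capable task labels below are invented, and the paper's actual O*NET data and classification pipeline are far larger and more nuanced. The idea is simply that each occupation is broken into subtasks, each subtask gets a binary AI-capable label, and the labels are aggregated into a share of exposed workers plus the rough hour-per-day figure.

```python
# Illustrative sketch of a task-exposure aggregation in the style of
# "GPTs are GPTs". All occupations, counts, and labels are made up.

occupations = {
    "technical_writer":  {"workers": 50_000,  "tasks_ai_capable": 7, "tasks_total": 10},
    "software_engineer": {"workers": 120_000, "tasks_ai_capable": 5, "tasks_total": 10},
    "surgeon":           {"workers": 30_000,  "tasks_ai_capable": 0, "tasks_total": 10},
}

# An occupation counts as "exposed" if at least 10% of its subtasks
# are labeled as something an AI could help with.
EXPOSURE_THRESHOLD = 0.10
WORKDAY_HOURS = 8

total_workers = sum(o["workers"] for o in occupations.values())
exposed_workers = sum(
    o["workers"]
    for o in occupations.values()
    if o["tasks_ai_capable"] / o["tasks_total"] >= EXPOSURE_THRESHOLD
)

share_exposed = exposed_workers / total_workers
# A 10% productivity uplift on an 8-hour day is the rough
# "one hour saved per workday" figure mentioned above.
hours_saved = 0.10 * WORKDAY_HOURS

print(f"Share of workers exposed: {share_exposed:.0%}")
print(f"Approx. time saved per workday: {hours_saved:.1f} hours")
```

With these invented numbers, two of the three occupations clear the threshold, so 85% of the workers count as exposed; the real study's "about 80%" headline comes from the same kind of aggregation over thousands of O*NET occupations.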

What’s next, really? Because there’s lots of investment in hardware and also in development. OpenAI was valued at about $300 billion early this year. So, we’re all sort of waiting for the next inflection point, potentially an innovation jump. And if the current stage is a productivity uplift, then the next is likely to be a jump in innovation, most likely led by these agentic AI frameworks. Okay. That’s so interesting. Yeah. Yeah, that’s right, Chris. Lots of new things coming up. And I think that also speaks to the purpose of our podcast. I know there are lots of other podcasts on AI, and we’d love to hear from our listeners, but I think there’s lots of demand for relevant, current, and useful information that gets individuals and organizations thinking about the role and the applications of AI in their own work activities, and also thinking strategically about planning to integrate AI into their organizations. Yeah, so that is a wrap for this week. Stay tuned for our next episode.