Hitachi Vantara Federal’s Pragyansmita Nayak Dives Into Data & AI in the Federal Space

As chief data scientist of Hitachi Vantara Federal, Pragyansmita Nayak leads the company’s work to provide government agencies with top-notch data analytics offerings. A self-described “passionate data scientist,” she has spent more than two decades in the federal market and has published numerous articles discussing data, artificial intelligence and more.

Nayak recently sat down with GovCon Wire for an Executive Spotlight interview, during which she examined the current state of the AI field and shared her insights on the direction in which it is headed. She also broke down the challenges that federal agencies face with AI adoption and discussed the impact of this technology on today’s defense landscape.

Read the full Executive Spotlight interview below.

Tell me about the current state of the artificial intelligence market. Where are you seeing new opportunities in AI, and where do you think the market is heading?

I’ve seen a consistently increasing interest in AI and machine learning. Agencies are trying to use it to meet their missions and goals and are working to better understand where they are going with it. You can see these efforts in the latest Executive Order from the White House and in agencies delivering their own data strategies and even AI and cloud strategies. That is proof that this is not just passing hype or interest in AI.

That being said, 2024 will definitely be the year where we see the value of AI being delivered. If we look back at 2023, it was the year when AI caught the popular fancy with ChatGPT and large language models, bringing the field to light for those outside it. I expect that in 2024, we will start seeing more interesting applications and looking for the next big value-delivering application, algorithm, model or approach to AI/ML. At one point it was deep learning, then came natural language processing. What’s next? What will be the 2024 thing?

I also think that we will see more impetus behind traditional machine learning and small language models as we realize that LLMs cannot be deployed and used everywhere, and that there is a lot of work around large language models where we don’t want to give away our IP. As a company, when we are trying to work with large language models, there are approaches to work around that concern.
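As a rough sketch of one such workaround, not anything Nayak describes specifically: a small, openly available model can be run entirely inside your own environment, so prompts and proprietary data never leave it. The `transformers` library and `distilgpt2` model below are illustrative assumptions.

```python
# Illustrative sketch: keep prompts and data in-house by running a small
# language model locally instead of calling an external LLM API.
# Assumes `pip install transformers torch`; the model choice is an example only.
from transformers import pipeline

# Downloaded once, then runs entirely inside your own environment.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarize the maintenance log:"  # sensitive text stays local
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```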

AI is being used in every domain today, be it manufacturing, finance, healthcare, retail, agriculture or the public sector, and I expect that to keep expanding as we get more value out of it. There are other areas within the AI/ML space, such as edge computing and natural language processing, that I think are going to make much better use of these technologies.

As these AI/ML applications increase in usage, we’ll see much more growth in the space of ethical AI, bias mitigation and explainable AI. These three things will become more important, as will interdisciplinary collaboration across different fields.

What role can AI/ML play in enhancing sensor networks and data processing capabilities, and how is your company harnessing the technology in this area?

AI/ML ties in with edge computing, or getting your computing closer to where the data is being generated and where processing needs to happen, so that it is faster, is not impacted by network bandwidth and connectivity, and can meet needs in a distributed manner. There are challenges with getting data back to the core. If you want to use that data in the long term to do some type of collective analysis and aggregation, you will need to bring it back to the core, so it is important to filter out what data is needed.

Edge computing is primarily about these sensor networks. Data is being collected at remote points that often have very hostile conditions and limited bandwidth. There is a variety of ways in which these sensors work: the data may be collected every second, or at a much lower frequency, maybe on a weekly basis. AI/ML will help with that type of complexity, tackling the variety of data a sensor can produce and the varied processing capabilities needed to understand it. It will also help with filtering the data to figure out what is important and what needs to be kept for the long term.
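As a hedged illustration of that kind of edge-side filtering, the sketch below keeps a rolling window of recent readings and forwards only values that deviate sharply from the recent mean back to the core. The window size, threshold and `send_to_core` uplink are hypothetical stand-ins, not anything described in the interview.

```python
# Illustrative edge-filtering sketch: forward only anomalous readings to
# the core, discarding routine data to save bandwidth. All names and
# thresholds here are hypothetical.
from collections import deque
from statistics import mean, stdev

WINDOW = 60        # rolling window of recent readings
THRESHOLD = 3.0    # forward readings more than 3 sigma from the mean

recent = deque(maxlen=WINDOW)

def send_to_core(reading: float) -> None:
    print(f"forwarding anomalous reading: {reading}")  # placeholder uplink

def on_sensor_reading(reading: float) -> None:
    if len(recent) >= 2:
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(reading - mu) > THRESHOLD * sigma:
            send_to_core(reading)
    recent.append(reading)

# Example: a steady signal with one spike; only the spike is forwarded.
for value in [10.1, 10.0, 9.9, 10.2, 10.0, 55.0, 10.1]:
    on_sensor_reading(value)
```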

All of these add a lot of complexity when working with sensor networks. Where Hitachi Vantara Federal comes in is with our years of information technology and operational technology experience. Hitachi has its roots in manufacturing, industrial equipment, heavy machinery and consumer electronics, and our digital expertise and capabilities have grown around and in support of those domains for the past 60-plus years. Hitachi’s vertical expertise is perfectly suited for the convergence of the IT and OT realms, and that experience of working with sensor-based data prepares us to work with this complexity today and deliver value.

What are some of the key barriers that remain in widespread federal AI adoption, and how do you think we can overcome them?

One of the key barriers in widespread federal AI adoption is that the effort has been broad, but each agency is trying to do its own thing. Though every agency has different problems to address, there is a lot of collaboration that can happen, and I’m expecting the Executive Order to get that going. The second factor is the lack of a skilled workforce that is familiar with the AI development cycle. The government is addressing this challenge through one of the criteria in the Executive Order, which aims to expand the workforce and make it easier to hire AI/ML specialists and upskill and re-skill individuals.

The third is that AI/ML solution development is somewhat different from the way traditional software development happens, where you have set requirements or can think through your requirements in advance before you even start implementing. AI/ML development is often more agile in nature: as you go through your implementation and analysis, you will uncover new trends, patterns and quirks in your data, and certain behaviors will emerge that you were not aware of before.

That remains a barrier because the way the federal contracting process happens through the RFA-RFP cycle does not currently account for this type of agility. The contracting process needs to evolve to include criteria, constraints and assumptions that encourage this exploration of the solution space during an AI solution implementation. This would ensure that somebody designing an AI solution and going through this cycle doesn’t get penalized by timelines that cause them to miss areas of opportunity.

What kind of tools and technologies can organizations use to make their data more accessible and understandable?

Every solution that you read about will claim to break down data silos, but data silos are there for a reason: a lack of trust among different components within an organization. There is certain data that is sensitive in nature, and you don’t want it to be widely accessible.

Admittedly, data silos can also be caused by technology challenges. Sometimes agencies are unable to work across the different places where data has been moved, or they had to meet a certain compliance or regulatory need, and as a result their data has become siloed either on-premises or across one or more cloud providers.

Hybrid cloud technology platforms, or anything that gives agencies the flexibility of working with their data as one holistic whole, are important. What helps in that process is good data integration, data orchestration capabilities and a data platform capable of effective metadata management, so you have the metadata, or data about data. This information allows you to do a wider search across your data corpus and find data relevant to a specific problem.

A tool in that space that makes sorting through metadata more convenient is a data catalog. An effective data catalog makes data more accessible and helps you get more value from it. As you access your data, you can put it to work in more applications and get even more return on the investment in the data being stored.
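As a minimal sketch of the idea, assuming a toy in-memory registry rather than any particular catalog product: each dataset is registered with descriptive metadata, and a keyword search over that metadata surfaces relevant datasets without touching the underlying data.

```python
# Minimal, hypothetical data-catalog sketch: search metadata ("data about
# data") to find relevant datasets across a corpus. Entries are invented.
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    name: str
    description: str
    tags: list[str] = field(default_factory=list)

catalog: list[DatasetEntry] = [
    DatasetEntry("turbine_telemetry", "Vibration readings from field sensors",
                 ["sensor", "edge", "maintenance"]),
    DatasetEntry("contract_awards", "Historical contract award records",
                 ["procurement", "finance"]),
]

def search(keyword: str) -> list[DatasetEntry]:
    """Match a keyword against names, descriptions and tags."""
    kw = keyword.lower()
    return [e for e in catalog
            if kw in e.name.lower()
            or kw in e.description.lower()
            or any(kw in t for t in e.tags)]

# Find every dataset relevant to sensors without opening the data itself.
for entry in search("sensor"):
    print(entry.name, "-", entry.description)
```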

In that process, there are various concepts and technologies, such as data mesh architecture, that are growing in prominence. In this approach, each domain is responsible for its own data, makes that data accessible to others as data products, and participates in an interchange of data products and metadata across all domains. That is a concept which is still growing.
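A hedged sketch of that pattern, with invented domain and field names: each domain publishes its data as a product with a declared schema and access method, and other domains consume it through a shared registry rather than copying the raw data.

```python
# Hypothetical data-mesh sketch: domains publish data as products; other
# domains discover and consume them through a shared registry.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataProduct:
    domain: str                      # owning domain, accountable for quality
    name: str
    schema: dict[str, str]           # published contract for consumers
    fetch: Callable[[], list[dict]]  # access method exposed by the owner

registry: dict[str, DataProduct] = {}

def publish(product: DataProduct) -> None:
    registry[f"{product.domain}.{product.name}"] = product

# The logistics domain owns and publishes its shipments data.
publish(DataProduct(
    domain="logistics",
    name="shipments",
    schema={"id": "str", "weight_kg": "float"},
    fetch=lambda: [{"id": "S-1", "weight_kg": 120.0}],
))

# Another domain consumes the product via the registry, not a direct copy.
shipments = registry["logistics.shipments"].fetch()
print(shipments)
```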

How do you make information easily accessible and searchable across all domains? Knowledge graphs and ontologies paired with a data catalog are something I’m hearing a lot more about in the journey to get more value out of data. I expect to see more innovations in that space to make the whole data architecture scalable and resilient.
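To make that concrete with a small hypothetical sketch: datasets and concepts become nodes in a graph, labeled edges capture their relationships, and a traversal answers questions like “which datasets relate to this concept?” Everything in the graph below is invented for illustration.

```python
# Toy knowledge-graph sketch: datasets and concepts as nodes, labeled
# edges as relationships; traversal answers "what data relates to X?".
graph: dict[str, list[tuple[str, str]]] = {
    "concept:maintenance": [("described_by", "dataset:turbine_telemetry"),
                            ("described_by", "dataset:work_orders")],
    "dataset:turbine_telemetry": [("feeds", "dataset:failure_predictions")],
}

def related_datasets(start: str) -> set[str]:
    """Breadth-first traversal collecting every reachable dataset node."""
    seen, queue, found = {start}, [start], set()
    while queue:
        node = queue.pop(0)
        for _label, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
                if target.startswith("dataset:"):
                    found.add(target)
    return found

# Finds telemetry and work orders, plus the downstream predictions dataset.
print(related_datasets("concept:maintenance"))
```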

What’s your outlook on the global defense landscape? What significant changes or trends are you seeing, and how are those factors moving the GovCon market?

The global defense landscape is changing because warfighting is decidedly more electronic today. You are always told to think like the adversary, and it has become increasingly important to do so in this form of warfighting. You have to be on par with your adversary, or even ahead, so that you have better insight into what type of access they have to your network and data.

Social media has been used to spread misinformation, and that is not something which was possible before, so we need to keep watch on it. I think misinformation is one of the key parts of warfighting now.

Apart from being ahead in terms of your AI/ML technologies, being aware of different data types and making sure your data isn’t exposed are emerging as key parts of algorithmic and electronic warfare, where AI/ML is being used to get that much-needed step ahead. Having a more automated way of leveraging data, ML and pattern analysis capabilities is a huge advantage in this form of warfare.

We should use these technologies for both the defense enterprise and the warfighter, and we should do so in a way that is intuitive for the warfighter – you can’t expect everyone to be a data scientist. ChatGPT gained prominence over more powerful AI/ML capabilities because it is user friendly and intuitive. When we are delivering solutions for the warfighter, they have to be intuitive enough, fitting into the warfighter’s line of functioning so that they feel organic.
