AI Use Cases That Actually Fix Engineering Bottlenecks


This is an excerpt from Chapter 3 of “AI for the Enterprise: The Playbook for Developing and Scaling Your AI Strategy,” a new ebook by acclaimed tech journalist Jennifer Riggins and sponsored by Red Hat and Intel. From the advantages of the “two-speed” AI investment model to measuring the real impact of AI, this free book, now available for download, helps enterprise leaders create an AI strategy to unlock productivity gains, solve previously impossible problems and gain a true competitive edge.

If writing code was never the bottleneck and code generation is already widely adopted, then organizations need to look elsewhere for new AI use cases. There will be cases of one-size-fits-all AI: SAP’s most popular AI use case by far is scanning and processing receipts, something many employees do. More likely, though, AI will solve smaller pains, often specific to roles or functions. Keep things lean and simple by starting with early wins that can grow steadily over time. Then, once a use case is proven, you can adapt it for other teams.

While there are plenty of nontechnology problems to solve, the first AI use cases tend to be incubated within tech teams, said Hannah Foxwell, founder of AI for the Rest of Us. It feels safer to fail in the tech division because the internal data used by tech teams may not carry as much weight as finance, human resources and customer data. AI could have a greater impact in those areas, but they also carry much higher risk.

Documentation and Technical Debt

Developers are consistent about what slows them down most: insufficient, out-of-date documentation and technical debt. Neither is an issue most developers want to fix themselves. Both also worsen as code volume grows rapidly, and both are use cases where AI can shine, because it is good at having conversations and explaining complexity. In fact, the biggest AI win uncovered in the 2024 “DORA” report was that a 25% increase in AI adoption triggered a 7.5% increase in documentation quality. Take this proven use case companywide by adding a chatbot overlay for internal information, ownership and processes.

Rachel Laycock, CTO of Thoughtworks, finds that legacy modernization is still the top challenge facing most enterprises. “It’s not clear when we will get to a settled space of the top three or five tools and models people use,” she said. Additionally, she said, people are most often building “greenfield” apps, which is easy compared to building features that have to integrate with legacy applications. The market, she continued, is “focusing too much on efficiency of producing code, which isn’t actually the problem.”

Code Review

Another bottleneck that grows with development speed is code review, which leaves developers waiting longer for feedback and increases cognitive load.

“Previously, quality was derived from having human eyes at two steps: the developer who wrote it and the PR [pull request] reviewer,” Foxwell said. “Now the developer who ‘wrote’ it may not have read it. It may have been written by AI. It looks fine to ship, and the person reviewing it is faced with a mountain of code to review all at once.”

AI is also useful here, observed Nathen Harvey, DORA lead and product manager at Google Cloud. With the right specifications, documentation and guardrails in place, you can use AI as a first-round PR reviewer to give developers rapid, automated feedback. AI can then assign the PR to the peer with the availability and the knowledge to review it. That human review takes less time and is more constructive, because the human in the loop can be confident the PR already adheres to established organizational standards.
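What this looks like in practice depends on the tools an organization already runs. The sketch below is a minimal, illustrative version, assuming a GitHub-hosted repository; the REST endpoints are real GitHub API routes, but the guardrail prompt, the review_with_llm() helper and the static fallback-reviewer routing are hypothetical stand-ins for whichever approved model and ownership data an organization actually uses, not a prescription from the report.

```python
# first_pass_review.py - illustrative sketch of an AI first-round PR reviewer.
# Assumes a GitHub repo, a GITHUB_TOKEN in the environment, and some approved
# LLM behind review_with_llm(); names and prompts here are hypothetical.
import os

import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Guardrails: the organizational standards the first pass checks against.
GUARDRAILS = (
    "Review this diff against our engineering standards: tests accompany "
    "behavior changes, public functions are documented, and no secrets or "
    "debug code are committed. List concrete issues; do not rewrite the code."
)


def review_with_llm(prompt: str) -> str:
    """Stand-in for whichever approved model/provider the organization uses.

    Returning a fixed string keeps the sketch self-contained; replace this
    with a real chat-completion call in practice.
    """
    return "No model wired up yet; replace review_with_llm() with a real call."


def first_pass_review(owner: str, repo: str, pr_number: int, fallback_reviewer: str) -> None:
    # 1. Pull the diff for the PR.
    files = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/files",
        headers=HEADERS, timeout=30,
    ).json()
    diff = "\n".join(f.get("patch", "") for f in files)

    # 2. Ask the model for first-round feedback framed by the guardrails.
    feedback = review_with_llm(f"{GUARDRAILS}\n\n{diff}")

    # 3. Post the feedback as a PR comment so the author gets it immediately.
    requests.post(
        f"{GITHUB_API}/repos/{owner}/{repo}/issues/{pr_number}/comments",
        headers=HEADERS, json={"body": f"Automated first-pass review:\n\n{feedback}"},
        timeout=30,
    )

    # 4. Route the PR to a human; a static fallback reviewer stands in for the
    #    "peer with the availability and the knowledge" matching step.
    requests.post(
        f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/requested_reviewers",
        headers=HEADERS, json={"reviewers": [fallback_reviewer]},
        timeout=30,
    )
```

In a real pipeline, the routing step would consult code ownership or review history to pick that peer; the point is that the human reviewer starts from feedback already checked against organizational standards rather than from a cold diff.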
Unlocking Insight in Data

Some organizations, such as those in financial services or government, have 100 years’ worth of data that they can finally unlock with AI. That process may not be an easy one, though, especially within budgetary, regulatory, privacy and security guidelines. When thinking about AI use cases, Red Hat Senior Product Marketing Manager Marty Wesley remarked: “The easiest ones are in financial services, because there’s just so much data, and they can use the AI to help collect a lot of that information and make decisions on it.”

Consider the often tedious, in-person, days-long experience of opening a commercial account, which requires companies to submit all kinds of structured and unstructured data across different documents. AI can read the existing forms and get a sense of the business, allowing the bank to move the whole process online in a much shorter time frame. This isn’t about justifying staff layoffs at these “accidental tech companies,” Wesley recommended, but about restructuring for value by reassigning staff to areas that have problems or simply need more help. Of course, setting up an account is a repeatable AI use case common to all kinds of businesses, from car insurance to university applications.

Another AI-driven use case is scanning unstructured data in images and videos far more rapidly than humans could. For example, Boston Children’s Hospital is using Red Hat technology to uncover patterns of disease, while a transportation company is using it to inspect railway tracks with drones. Boston Children’s Hospital says its AI radiology tools “are as good as the human radiologists, and they’re going to get better,” Wesley said.

Safe Experimentation

If you think you had tool sprawl before, wait until you add AI. No organization can afford to adopt every tool out there, but neither can it afford to miss the next AI productivity gain. Enterprises need a robust culture of experimentation supported by clear permissions, processes and guidelines. A good first step is encouraging teams to A/B test different products and offer feedback, as software development company Devexperts did when it tested and compared the code generation tools Copilot and Cursor, then shared its test methods and results. At this speed of change, your chief AI office will likely become a center of excellence for AI experimentation.

An AI pilot or proof of concept (POC) is “your chance to make sure a tool will actually deliver results for your organization: improving developer experience, accelerating development and ensuring your engineering practices still align with your foundational definition of excellence,” explained DX CTO Laura Tacho.

To read more, download “AI for the Enterprise: The Playbook for Developing and Scaling Your AI Strategy” today!

Source: This article was originally published on The New Stack
