JPMorgan Is Using AI to Write Staff Reviews

November 15, 2025 · News
#AI in Human Resources

At JPMorgan Chase, managers have a powerful tool right at their fingertips. With just a few clicks, they can open an internal chatbot, share some details about their role and experiences from the past year, and—voilà!—a polished performance review draft pops up in mere seconds. 

It might feel like magic, but it’s actually the bank’s in-house “LLM Suite” at work, turning those simple prompts into professional, ready-to-submit prose. All the manager needs to do is add a few personal touches before sending it off.

Now, while this might sound like a typical JPMorgan story (big bank, significant tech budget, state-of-the-art AI), the real intrigue lies beyond the logo and the scale. What’s truly captivating is how this system is transforming the way managers approach their work. When things get challenging and require that essential human touch, the strategy is refreshingly straightforward: let the software handle the heavy lifting first, and then just give it the thumbs up.

What JPMorgan actually turned on

According to the Financial Times and HR trade publications, JPMorgan is rolling out a new option: managers can now tap into the bank’s internal AI system to help craft year-end performance reviews. The tool is powered by the LLM Suite, JPMorgan’s proprietary platform, which reached about 200,000 users in its first eight months. It’s not limited to performance reviews, either; it’s also used for code reviews, investment banking presentations, and legal contract summaries.

The process is quite simple. A manager inputs prompts detailing an employee’s role, their goals, and significant events from the year. The model then generates a structured review draft that reads like something you’d expect from a thoughtful, HR-trained supervisor. The bank provides clear guidance: this generated text is just a starting point. Managers remain accountable for the final wording, and importantly, the tool is not intended for making decisions about salaries or promotions.
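The workflow described above (structured manager inputs in, review prose out) amounts to a prompt-assembly step in front of a language model. JPMorgan’s internals are not public, so the `ReviewInputs` schema and `build_review_prompt` function below are purely illustrative; in a real deployment the assembled prompt would be sent to an internal LLM endpoint, which is omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewInputs:
    """Structured facts a manager supplies about one employee (hypothetical schema)."""
    role: str
    goals: list = field(default_factory=list)
    highlights: list = field(default_factory=list)
    challenges: list = field(default_factory=list)

def build_review_prompt(inputs: ReviewInputs) -> str:
    """Assemble the manager's notes into a single drafting prompt.

    In a deployed system this string would go to an internal model;
    this sketch only shows the prompt-assembly step.
    """
    lines = [
        f"Draft a year-end performance review for a {inputs.role}.",
        "Goals for the year:",
        *[f"- {g}" for g in inputs.goals],
        "Notable accomplishments:",
        *[f"- {h}" for h in inputs.highlights],
        "Challenges observed:",
        *[f"- {c}" for c in inputs.challenges],
        "Write three short paragraphs in a professional, constructive tone.",
    ]
    return "\n".join(lines)

prompt = build_review_prompt(ReviewInputs(
    role="payments analyst",
    goals=["Reduce settlement errors"],
    highlights=["Cut reconciliation backlog by 40%"],
    challenges=["Missed two quarterly reporting deadlines"],
))
print(prompt)
```

The point of the sketch is how little the manager supplies: a handful of bullet points, from which the model produces finished prose.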

This initiative is part of a much broader strategy. JPMorgan plans to invest around $18 billion in technology in 2025, with up to $2 billion specifically allocated for AI. Bank leadership is openly discussing how AI is poised to transform every job within the firm, from fraud detection to customer service. The LLM Suite sits at the core of this push, giving employees secure access to leading third-party models.

At first glance, the review bot might appear to be just a harmless productivity feature. However, it’s actually reshaping the very essence of what it means to be a manager.

The real story: managers are being deskilled

In a pre-AI world, performance reviews required managers to engage in a specific kind of heavy lifting. They had to reflect on a year’s worth of work, sift through their notes and memories, distinguish between what was significant and what was not, and then condense all of that into a coherent narrative: here’s what mattered, here’s where you grew, and here’s what needs to change. It was one of the few moments where genuine judgment, rather than mere administration, was essential.

Now, picture that same manager with a chatbot at their disposal. Instead of grappling with a jumble of information, they can simply write a rough prompt: job title, a few accomplishments, and a couple of challenges. The system then generates three tidy paragraphs. The temptation to skim through, tweak a sentence or two, and paste it directly into the official review system is hard to resist. Over time, this shift alters which skills and instincts they rely on.

Research highlighted in discussions about JPMorgan’s initiative shows that when people lean on AI assistants, they often outsource more of the analytical work than they realize. They become less likely to critically examine the evidence themselves and more inclined to accept whatever the system presents.

This poses a significant risk for employees. The real danger isn’t a sinister algorithm deciding their fate; it’s a manager who has gradually stopped practicing the more challenging aspects of their role. While they may still sign off on your review, they may struggle to explain, defend, or take ownership of it because they never fully formed that perspective themselves. Instead, they become mere editors of machine-generated text rather than authors of a thoughtful evaluation of your work.

Why employees are right to feel uneasy

Once AI takes the lead in writing performance reviews, the tone of these evaluations begins to shift noticeably. Outlets like Bloomberg have labeled this trend “AI work slop,” highlighting a key issue: while the text produced is often grammatically correct, it tends to be emotionally flat and surprisingly interchangeable from one employee to another.

You can see this change happening already. Reviews may sound smoother, but they often lack meaningful content. Instead of specific stories and examples, you’ll find generic phrases like “strong collaboration” or “growth opportunities.” The distinction between an outstanding year and an average one is reduced to just a handful of adjectives. Over time, both high and low performers might end up reading nearly identical templates, with only minor tweaks. When employees start to suspect that much of what they’re reading was generated by a bot, it’s easy for them to dismiss both praise and criticism. The document stops feeling like a genuine assessment of their work; it becomes just another formality that needs to be completed.

Another crucial element that suffers in this process is advocacy. In most organizations, when it comes time to make decisions about promotions or raises, managers are expected to champion their team members. They should enter calibration meetings armed with insights drawn from a year of observation and reflection. However, if they’ve let an AI system handle most of that heavy lifting, they’re more likely to depend on whatever labels, phrases, or ratings the system has already churned out. This reliance diminishes their ability to advocate effectively for you in a room full of other managers, as they haven’t engaged deeply enough with the material to build a compelling case.

The bottom line is that while performance management might look cleaner on the surface, it feels emptier underneath. The process becomes faster, and the documents appear more polished, but the human substance that gives a review its true significance becomes increasingly thin.

The straight line from drafting to scoring

Another important point to consider is that the difference between “AI that writes” and “AI that scores” is much smaller than it seems. 

JPMorgan’s tool is designed to help managers draft performance reviews. However, the way it works (collecting data about employees, analyzing it, and producing structured output) is the same machinery other companies use to assign ratings and labels.

For example, the HR platform Rippling has a product called Talent Signal. It reviews a new hire’s first ninety days and assigns one of three tags: “high potential,” “typical,” or “pay attention.” Managers receive reports with specific examples from tools like GitHub, Salesforce, or Zendesk to explain these tags.

KPMG, one of the Big Four accounting firms, is taking this idea even further. Starting in 2026, employees will be evaluated not only on what they produce but also on how well they use the firm’s AI tools, with data from systems like Microsoft Copilot influencing their reviews. KPMG leaders have stated that using AI will become a formal part of performance assessments.

According to KPMG’s guidance on using AI in the workplace, over 80% of leaders plan to include generative AI in performance reviews, and nearly 20% are already doing so.

These systems all work in a similar way: they gather data about what you do, use AI to summarize and label that behavior, and then feed those summaries into a human review process. When JPMorgan’s LLM Suite helps a manager draft a review, it’s already organizing information about your performance. Changing that information into a score or label isn’t a new technology; it’s a choice made by the company.
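The claim that drafting and scoring share the same machinery can be made concrete with a toy pipeline. Everything here (the `summarize`, `draft_review`, and `score_employee` functions, the event schema, the threshold) is invented for illustration; the point is only that once behavior has been summarized, scoring is one extra function call, not a new technology.

```python
def summarize(events):
    """Stand-in for the AI step: condense raw activity into labeled counts."""
    return {
        "wins": sum(1 for e in events if e["outcome"] == "win"),
        "misses": sum(1 for e in events if e["outcome"] == "miss"),
    }

def draft_review(summary):
    """Drafting mode: turn the summary into prose for a human to edit."""
    return (f"Delivered {summary['wins']} notable wins this year; "
            f"{summary['misses']} commitment(s) slipped.")

def score_employee(summary):
    """Scoring mode: the same summary, collapsed into a label (arbitrary threshold)."""
    return "high potential" if summary["wins"] > 2 * summary["misses"] else "pay attention"

# Both modes consume the identical summary of the identical event stream.
events = [{"outcome": "win"}] * 5 + [{"outcome": "miss"}]
s = summarize(events)
print(draft_review(s))
print(score_employee(s))
```

Swapping `draft_review` for `score_employee` changes nothing upstream, which is exactly why the line between “AI that writes” and “AI that scores” is a policy decision rather than an engineering one.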

This connection is important. Right now, AI helps shape the words your manager uses in your review. In the future, the same system could assign a label to your profile, and a manager who isn’t used to making independent judgments might accept that label as the truth.

This is not just JPMorgan

JPMorgan is a significant example because it is a large and well-known bank, but the trends we see there are not unique to global banks.

On one end, you have companies like JPMorgan that are building their own AI platforms, investing billions in AI infrastructure, and integrating tools like LLM Suite into everyday tasks, including performance reviews.

On the other end, professional services firms like KPMG are changing their performance systems to include AI as a key factor in how employees are evaluated.

In between, companies like Rippling and Workday are offering AI-based review drafting and early-signal scoring as part of their HR products for smaller organizations. Analysts are already saying that Rippling’s Talent Signal and its AI-generated reviews could become standard features in performance software, rather than just a novelty for early adopters.

Your organization might choose not to use these tools at all. However, it’s also possible that one day, an update will introduce a new button in your review tool that says “Generate draft” or “View AI insights.”

The key point here is not to suggest that everyone will have AI-written reviews next year. Instead, it’s to emphasize that this option is now available as a product feature and a decision for management. Eventually, someone in your leadership will need to answer a crucial question: if we can use this technology, should we? And if so, where do we set the boundaries?

What helpful AI might look like

AI isn’t automatically a bad thing for performance reviews. It can help reduce tedious work without undermining managers. However, this requires more than just vague ideas like “human in the loop.”

One good approach is to let AI assist in gathering evidence but not take over the storytelling. AI can be useful for compiling project lists, metrics, peer feedback, and notes into an easy-to-review format. It can also highlight patterns that might be missed, such as a series of missed deadlines or consistent positive customer feedback. However, the actual narrative of the review, the explanation of what matters, should still come from the manager in their own words, including specific examples. This keeps the important work of interpretation and synthesis with the human manager.
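A minimal sketch of that division of labor: the code flags mechanical patterns (a run of missed deadlines, repeated positive feedback) and hands them to the manager, who still writes the narrative. The record schema, event names, and thresholds below are assumptions for illustration, not any vendor’s actual design.

```python
from collections import Counter

def flag_patterns(records):
    """Surface patterns a manager might miss; interpretation stays with the human.

    `records` is a list of (date, event_type) tuples, where event_type is
    e.g. 'deadline_missed' or 'positive_feedback' (hypothetical schema).
    """
    counts = Counter(kind for _, kind in records)
    flags = []
    if counts["deadline_missed"] >= 3:  # arbitrary threshold for a "pattern"
        flags.append(f"{counts['deadline_missed']} missed deadlines this year")
    if counts["positive_feedback"] >= 5:
        flags.append("consistent positive customer feedback")
    return flags

records = [
    ("2025-02-01", "deadline_missed"),
    ("2025-04-10", "deadline_missed"),
    ("2025-06-05", "deadline_missed"),
    ("2025-07-12", "positive_feedback"),
]
print(flag_patterns(records))  # only the deadline pattern crosses its threshold
```

The output is evidence, not a verdict: the manager still decides whether three missed deadlines reflect overload, a bad quarter, or a real performance issue.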

Another key point is to treat any AI-generated labels as inputs, not final decisions. If a company uses tools that label employees as “high potential” or flag risks, it should be clear that no label alone can decide outcomes. Managers should explain, in simple terms, why they agree with the system’s assessment or why they choose to ignore it. This approach doesn’t eliminate bias, but it ensures that accountability lies with people, rather than pretending the AI made the tough choices.

Finally, if AI makes drafting easier, the time saved should be used for giving feedback, not just creating more dashboards. Organizations are already investing in training on how to use AI at work. They also need to train managers on having difficult conversations, giving clear and specific feedback, and recognizing when the AI system is wrong. Without this, AI can reduce administrative tasks but also risk weakening the very managers whose judgment employees depend on.

The uncomfortable truth is that the biggest risk with AI-written reviews isn’t that an algorithm dislikes you. It’s that the person who signs your review may become less capable of doing what truly matters: understanding your work and advocating for you.

JPMorgan’s chatbot illustrates this trade-off on a large scale. Now, every company must decide whether to use AI to support human judgment in reviews or let it quietly replace that judgment.

Y. Anush Reddy

Y. Anush Reddy is a contributor to this blog.