Notes to a Young AI Professional: On Speed, Status, and Sanity

Those familiar with my writing will know that I usually write about artificial intelligence in terms of regulation, governance, and risk. This piece is a slight departure. What follows is a set of reflections for young professionals working in AI, or considering work in the field, at a moment when the pace of change, the visibility of the space, and the pressure to find one’s place in it can easily create more anxiety than clarity. I use the phrase “young professional” in a broad sense. It follows a familiar tradition in reflective writing, but I do not mean it strictly in terms of age. In a field like AI, a person can be well established in one profession and still quite new to the work of finding their place in this one.

Part of the reason I wanted to write this piece is that I increasingly think one of the risks in AI is not only model failure, weak governance, or poor regulation. It is also the possibility that we scare away or burn out the people trying to work responsibly in this field. That may sound like a softer concern than the others, but I do not think it is any less real. A professional environment that rewards urgency, overcommitment, and constant public repositioning can take a considerable toll on the people inside it. Even by the standards of emerging fields, AI feels unusually intense. The pace is faster, the visibility is greater, and the pressure to position oneself convincingly is unusually strong.

Lawyers, of course, are not strangers to that problem. Mental health challenges in the legal profession have been with us for a long time, and they do not disappear simply because the subject matter becomes more modern or more exciting. I do not say that from a comfortable distance. Some years ago, I had my own burnout moment and learned, more forcefully than I would have preferred, that professional intensity has limits. That experience left me more attentive to the role of restraint in building a sustainable working life.

I offer these reflections with some humility. AI is moving too quickly, and the world around it remains too unsettled, for complete certainty to be a particularly convincing professional pose. Much of this field is still being built in real time, and most of us are learning while we work. What follows, then, is simply a set of notes for those trying to orient themselves in a professional environment that often creates more confusion than clarity.

You are probably not as behind as you feel

One of the defining features of the current AI moment is speed. New tools arrive constantly. New reports circulate almost weekly. New conferences, institutes, advisory groups, credentials, and public statements continue to proliferate. It is very easy to look at the landscape and conclude that one is already late.

That feeling, in my view, is often misleading. Part of the difficulty is that AI is not only moving quickly, but moving quickly in public. The field generates a constant stream of visible activity, much of it presented with confidence and urgency. You are not simply trying to understand a changing area of work. You are also exposed to a steady flow of signals suggesting that others are understanding it faster, speaking about it more fluently, and positioning themselves more effectively.

This can create a persistent and unnecessary sense of professional inadequacy. In reality, many people are still trying to find their footing. They may sound certain, but certainty and clarity are not always the same thing. A fair amount of what looks like settled expertise is still early experimentation, provisional positioning, or an understandable attempt not to appear uncertain in a field that does not reward hesitation.

There is, in other words, an important difference between being late and simply being thoughtful. The latter may look slower from the outside, but it is often more durable in the long run.

Public signals often exaggerate actual adoption

Public signals about AI adoption tend toward the optimistic. Professional networks and industry forums present a picture in which organizations are already using AI in sophisticated, embedded, and highly strategic ways, and in which practitioners have rapidly developed genuine expertise across a wide range of emerging roles. That picture is understandable. In a competitive and fast-moving field, there are real incentives to present progress confidently, and most people doing so are responding reasonably to the pressures around them.

Experience suggests a more restrained reality. AI adoption remains uneven. In some organizations, the tools are being used actively and with real effect. In many others, however, use is still tentative, fragmented, informal, or relatively shallow. Sometimes a small number of employees are experimenting while senior leadership remains uncertain. In other cases, a formal AI strategy has been announced even though the organization is still struggling with the basics of procurement, data governance, staff training, and oversight. In still others, there is considerable enthusiasm at the rhetorical level but very little disciplined operational integration.

This gap between public performance and institutional reality matters because it affects how professionals understand their own progress. If one is constantly comparing oneself to a curated picture of universal adoption, one may begin to assume that one is missing something fundamental. Often, that is not the case. Often, what one is seeing is a mixture of aspiration, selective disclosure, and the ordinary tendency to present progress more confidently than it is actually being achieved.

For younger professionals in particular, this point is worth remembering. Visibility is not the same as substance. Public alignment with AI is not always evidence of deep capability or careful judgment.

Governance around AI is less mature than it appears

A similar dynamic exists in the governance world. If one were to judge by public discussion alone, one might think that AI governance is already highly developed. The vocabulary has expanded quickly and, in many respects, usefully. Yet much of the actual governance work remains underdeveloped.

This is not to say that no serious work is being done. Quite the opposite. Many thoughtful people are trying to build credible governance systems under difficult conditions. But it is important to be honest about how early much of this work still is. The language of maturity should not be confused with maturity itself.

For professionals entering the field, that can actually be reassuring. If the governance landscape feels unsettled, that is often because it is unsettled. You are not failing to perceive a stable system that everyone else already understands. More often, you are seeing the truth of the situation.

The scramble for credentials and relevance is real, but it should not govern your life

Another feature of the current environment is the scramble for position. New affiliations appear quickly. Titles evolve just as fast. Invitations matter. People understandably seek ways to locate themselves within a growing field and to communicate relevance to employers, clients, institutions, and peers.

Some of this is legitimate. People do change their work in response to important developments. They acquire new knowledge, develop new practices, and build new expertise. There is nothing inherently suspect about that. At the same time, one should understand that AI has also created a strong incentive for professional relabelling. A person can easily begin to feel that everyone else is accumulating credentials, invitations, and designations at an impossible pace.

This can create its own form of anxiety. Why was I not invited to that event? Why am I not on that panel? Why have I not yet joined that network or completed that program? Why does everyone else appear to be accelerating while I am still trying to do careful work?

Some version of those questions will be familiar to many professionals in this space, and they are ones that I have asked myself throughout my career. But over time I have become less persuaded that frantic accumulation is the right response. In fields like this, a reputation built too quickly can become fragile just as quickly. In the long run, careful work, sound judgment, and an identifiable area of contribution matter more than trying to appear everywhere at once.

This is also one reason that a willingness to be wrong matters so much. It keeps a person from becoming overly invested in performance. It leaves room for learning. And it provides at least some protection against the temptation to confuse visibility with substance.

You may need to choose a narrower lane than the field encourages

When I first began working more intensively around AI, I found myself pulled in many directions at once. That is partly the nature of a fast-moving field. Opportunities emerge quickly. Requests multiply. Everything seems important. The temptation is to say yes broadly, particularly if one is trying to establish a place in the conversation.

There can be value in that at the beginning. It can help a person understand the landscape, identify where real needs exist, and determine where one’s experience is most useful. But I have come to think that remaining in that posture for too long carries real costs.

At some point, for reasons of both strategy and sanity, it may become necessary to narrow one’s scope. In my own case, I have increasingly limited my work to matters of governance, risk, and compliance, together with board-facing work. That has not reflected a lack of interest in the wider AI field. Rather, it has reflected a growing sense that one cannot do serious, sustainable work while trying to respond to every opportunity that presents itself. Some selectivity is not a retreat from ambition. It is often what allows professional judgment to remain intact.

For younger professionals, this may be one of the harder lessons. The fear of missing out is real. In AI especially, it is easy to feel that every invitation declined is a door closing. But a career cannot be built on perpetual overextension. It is entirely possible that the best thing you can do in this environment is not to cover the whole field, but to identify the part of it where your skills, values, and temperament are best suited.

Calm is underrated

The final note is perhaps the simplest. In a field shaped by noise, speed, and visible ambition, calm has become an underrated professional quality.

By calm, I do not mean passivity or indifference. I mean the ability to remain measured in a setting that rewards urgency, to maintain perspective when others are performing certainty, and to keep one’s professional identity from being reorganized every time a new model, announcement, or institutional initiative appears. That kind of steadiness is not glamorous, but it is useful. It supports better judgment. It also makes a professional life more livable.

There is much in AI that is genuinely important and exciting. There are real opportunities here for meaningful work, especially for those who want to contribute to governance, accountability, and the responsible shaping of institutions. But those rewards are more likely to endure if one learns early that judgment matters more than speed, that visibility can mislead, and that restraint is sometimes the wiser part of professional seriousness.

If these notes have any common theme, it is simply this: the AI field generates a great many false signals. It can make thoughtful people feel late, peripheral, and underprepared even when they are none of those things. It can also encourage a style of working that is difficult to sustain and, in some cases, harmful. That too belongs within the conversation about AI risk.

Note: Generative AI was used in the preparation of this article.
