
Chapter 9 — The Personal Center of Excellence

To put the world in order,
we must first put the nation in order;
to put the nation in order,
we must first put the family in order;
to put the family in order,
we must first cultivate our personal life;
we must first set our hearts right.

Da Xue (The Great Learning), attributed to Confucius / Zengzi (c. 500 BCE)1

As within, so without; as above, so below.

Tabula Smaragdina, Hermetic tradition (c. 8th c. CE)2


The boardroom and the kitchen

In any given month I run two kinds of meeting that, on the surface, could not look more different.

The first is held in glass-walled rooms in cities I am visiting for the day — Madrid, Brussels, Amsterdam, Frankfurt, Riyadh — with the executive committees of large banks, telecoms, energy companies, and hospital systems. The conversation is about how to deploy AI across an organization of fifty thousand people in a way that is governed, ethical, valuable, and durable. The standard framework I and my colleagues at the Oracle EMEA AI Center of Excellence work with has six pillars. We will get to them in a minute.

The second meeting is held at my kitchen table, with a notebook, often on Sunday morning, with a strong coffee and the quiet of an apartment whose other inhabitants have not yet woken up. The conversation is with myself, on paper. The question is the same one the executive committees are asking, scaled to one human life: how do I deploy AI across the organization that is me, in a way that is governed, ethical, valuable, and durable?

I used to think these two registers were unrelated. The boardroom was professional life. The kitchen was personal life. The framework I was teaching in the boardroom was for organizations with budgets and headcount and compliance officers, not for me on a Sunday with a coffee.

Three years into this, I no longer believe that. The architecture transposes almost without modification. The six pillars work at both scales. The reason they work is that the underlying structure of deploying intelligence well is invariant to scale. A Fortune 500 needs Strategy, Governance, Talent, Technology, Data, and Ethics. So does a person. The cost of running the personal version is not 1/100 of the corporate version. It is closer to 1/5,000.

This is the most consequential framework I have stumbled into in the past five years, and it is the one I have built the public-facing parts of frankx.ai around. The framework is free. The implementation is open. The results, in the lives of people who have applied it, are not subtle.

This chapter is the architecture.


The six pillars, briefly

Before walking each one carefully, here are the six. I want them on a single page so you can see the whole shape before we open each room.

Strategy: What is AI for?
  Corporate: aligned with business strategy, OKRs, market position.
  Personal: aligned with life strategy, a north star, the work you are here to do.

Governance: Who decides what we will and will not do with AI?
  Corporate: risk committees, audit, policy, compliance.
  Personal: personal policies, refusal lists, review cadences.

Talent: Who has the skills?
  Corporate: AI engineers, data scientists, business translators.
  Personal: your own skill development; the agents and collaborators you cultivate.

Technology: What tools do we run?
  Corporate: models, platforms, MLOps, infrastructure, security.
  Personal: your laptop, models, agents, tools, the integrations you maintain.

Data: What is the substrate the AI thinks with?
  Corporate: data lake, feature stores, knowledge bases, governance.
  Personal: your notes, journals, reading, captured insights, the corpus that is you.

Ethics: What kind of impact will we have?
  Corporate: responsible AI, bias, fairness, harm, customers, society.
  Personal: what you will and won't do; truth-telling; effect on the people you love.

Each pillar exists in a healthy enterprise CoE because the enterprise has learned, often through expensive mistakes, that AI deployed without it is dangerous, wasted, or both.

Each pillar exists, or should exist, in a healthy personal CoE because the same logic applies at a tiny fraction of the cost. The mistakes scale down with the budget. The damage of unmanaged AI is not zero in a single life. It is just less expensive.

I will walk each pillar carefully, with the corporate practice first and the personal translation second.


Pillar 1 — Strategy

What is AI for?

In a corporation, this is the first question anyone serious will ask before approving an AI initiative. What problem are we solving? What is the value we expect? How does this advance the strategy of the business? If the answer is "because everyone else is doing it," the conversation ends and the initiative dies. The CoE's first job is to enforce the discipline that AI work serves the strategy and not the reverse.

In a life, the same question is asked too rarely.

Most people, faced with the explosion of AI tools, default to a fragmented use pattern — they use ChatGPT to draft emails, Midjourney for occasional illustrations, a code assistant when they remember it exists, and a music model the one time they tried it. There is no integrating purpose. The AI usage is a collection of micro-tactics in service of no strategy.

The personal Strategy pillar asks: what is AI for, in my life specifically?

The answer should be specific, written down, and revisited. It might be: AI is for letting me write the three books I have been postponing. It might be: AI is for running my one-person business at the operational level of a five-person business so I can keep my evenings. It might be: AI is for compressing the technical learning curve in my new field by a factor of three. It might be: AI is for the arts I want to make in the time my children are small, when I cannot afford the years of slow learning.

When the strategy is clear, every tactical question — should I learn this tool, subscribe to this service, take this course? — becomes answerable by reference to the strategy. When the strategy is unclear, every tactical question becomes a small fresh anxiety.

The Strategy pillar costs one Sunday morning every quarter. You sit down with a notebook and you write three lines: what is AI for, in my life, this quarter? You revisit it ninety days later. You revise it.

That is it. That is the entire pillar.

The cost of skipping it is everything downstream. The cost of running it is forty minutes, four times a year.


Pillar 2 — Governance

Who decides what we will and will not do with AI?

In a corporation, governance is the architecture of decision-making. Risk committees set policies. Compliance audits use cases. Security signs off on data flows. Legal reviews the procurement contracts. The CoE's role is to harmonize all of this into a workable, durable framework that lets the organization move fast on the right things and refuse the wrong ones.

In a life, the same architecture, in a smaller form, is the difference between a coherent operator and a scattered consumer.

Personal governance is the set of policies you have already decided about your own AI use, so that you do not have to re-decide them in the moment when you are tired and a tool is offering you something convenient.

Your personal governance should answer at least these:

What models will I use? For what? (e.g. Claude for code, GPT for drafting, Gemini for vision tasks, Suno for music. I will not chase every new release; I will revisit my stack quarterly.)

What will I refuse to use AI for? The refusal list is the most important part of personal governance. Mine includes: condolence messages, apologies to people I love, my children's homework, decisions that affect another person's livelihood. The category is not things AI can't do — these are things it can do — it is things I want to do as a human, with whatever cost in time and clumsiness, because the doing is the point.

Where does my data go? What I will paste into a model, what I will not, what I will put into local-only systems. This is the personal version of corporate data classification.

What is my approval cadence for new tools? Not every new release deserves your attention. A monthly review of one new tool, with a serious test of whether it earns a place in your stack, beats a daily flit between novelties.

The discipline of governance is the discipline of deciding once and obeying many times. You write the policy when you are clear and rested. You execute it on the days when you are tired and the tool is shiny. The policy holds.

The cost of the personal Governance pillar is one notebook page, revisited quarterly. The cost of skipping it is a steady drift toward a posture of using AI for everything, including the things that should remain hand-made — and the slow erosion of agency that follows.


Pillar 3 — Talent

Who has the skills?

In a corporation, talent is plural — the AI engineers, the data scientists, the business translators, the change agents, the executive sponsors. The CoE's role is to attract, develop, and deploy this talent, and increasingly to integrate human talent with the agentic systems that are now collaborators in the work.

In a life, talent has two faces.

The first is your own skill development. The disciplines of Chapter 8 — read deeply, suffer specifically, work in one domain for ten years, practice attention as a craft, develop taste, live a full life — are the personal Talent pillar. You are the AI engineer in your own enterprise, and the substrate you bring is the substrate the model has to amplify. Your reading list, your apprenticeship to a craft, your willingness to do the hard thinking yourself before deferring to a model — these are the talent budget of your life.

The second face, increasingly, is the network of agents and collaborators you cultivate.

In 2026, a serious operator's stack has grown beyond a single chat window. There are sub-agents that handle specific tasks. There are custom assistants tuned for particular domains — coding, music, writing, research, family logistics, health. There are scheduled agents that run weekly or monthly and produce reports, summaries, drafts. There are human collaborators — editors, mentors, peers — whose judgement you have come to trust.

The personal Talent pillar asks: what is the team around me, including the agentic part of it, and how is it being developed?

A useful exercise once a year: write the org chart of your life. Who are the humans you turn to for which kinds of work? Which agents do you delegate which tasks to? Which roles are unfilled? Where is the bottleneck — is it that you don't have a good editor, or that you don't have a good code review process, or that your music workflow has no QA step?

The personal CoE treats its own talent pipeline with the same seriousness a corporation treats its own. The org of one is still an org.


Pillar 4 — Technology

What tools do we run?

In a corporation, the technology pillar is the largest budget. Data platforms, model providers, MLOps tooling, security infrastructure, the enterprise integrations that let any of it touch the actual business. The CoE's role is to keep the architecture coherent and to prevent the proliferation of incompatible tools that are the curse of every enterprise software estate.

In a life, technology is smaller and more interesting.

Your personal technology stack — for AI specifically — should be deliberately narrow.

The mistake most people make is promiscuous adoption. They subscribe to every new model, install every new app, and keep none of them long enough to develop fluency. The result is a stack of half-learned tools and a slow background sense that one is falling behind despite spending more on AI subscriptions every month.

The right move is the opposite. Pick a small, durable stack. Develop deep fluency. Replace components only on a clear case.

A reasonable personal stack in 2026 — and I will date this so the reader can adjust as the field moves — looks something like this:

  • One frontier text model for general work (Claude, GPT, or Gemini)
  • One frontier coding agent (Claude Code, Cursor, Codex CLI) if you write code
  • One image model (Nano Banana 2, Midjourney, or similar)
  • One video model if you do video work (Veo, Runway, Sora)
  • One music model if you make music (Suno, Udio)
  • One transcription model for audio capture (Whisper or similar)
  • A note-taking system that holds your context across sessions (Obsidian, Notion, plain markdown)
  • A scheduling/agentic layer if you have crossed into multi-step automation (Claude Agent SDK, OpenAI Agents, or a hand-rolled stack)

That is the working set. Eight to ten tools, used deeply, replaced quarterly only when there is a clear delta.

The hardware question is mostly settled. A reasonable laptop. A reasonable internet connection. Headphones. A notebook by the bed. The hardware is not the bottleneck; the operator is.

The technology pillar's discipline is the discipline of running a small, fluent stack. The cost is the small ongoing tax of tool maintenance — credentials, subscriptions, the occasional swap. The benefit is fluency, which compounds over years.


Pillar 5 — Data

What is the substrate the AI thinks with?

In a corporation, data is the moat and the constraint. AI models are commodities; what is yours is your data. The CoE invests heavily in data quality, governance, lineage, retrieval architectures, and the engineering that lets models actually use what the company has.

In a life, the analog is what I think of as your corpus — the collected substrate of who you are, written down, indexed, accessible.

Most people's personal corpus is scattered, partial, and inaccessible to themselves and to any model that might help them. Notes in seven different apps. Insights in journals stored in a closet. The reading they did three years ago that they cannot remember the titles of. The drafts of three projects abandoned at different points across different drives. The result is that when they sit down with a model, they can prompt only from what is currently in their head — which, on a tired Tuesday, is a small subset of what they actually know.

A serious personal CoE treats its corpus the way a serious enterprise treats its data lake.

The minimum: a single, durable note-taking system that you have used for at least a year. Markdown is best because it survives format wars; tools change, plain text persists. Inside it, structured spaces — daily notes, project notes, reading notes, morning catches (Chapter 7), reference materials. Indexed. Searchable. Versioned, if you can manage it.

The middle: a vector index over the corpus, so that a model can retrieve from it. This used to be exotic; in 2026 it is a weekend setup with off-the-shelf tools. Once it exists, every interaction with a model can be augmented with retrieval from your own substrate. The model is no longer talking to the average internet; it is talking to you, mediated through your own writing.
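As a sketch of what "a weekend setup" means in practice: the `embed` function below is a deliberately toy placeholder (a hashed bag-of-words vector), standing in for a real embedding-model call; the indexing and cosine-similarity retrieval around it are the durable part of the pattern. All names here are illustrative.

```python
import math
import re
from pathlib import Path

DIMS = 512  # toy embedding width; a real setup uses an embedding model's output size

def embed(text: str) -> list[float]:
    """Toy placeholder embedding: a normalized hashed bag-of-words.
    Swap this for a real embedding model call in practice."""
    vec = [0.0] * DIMS
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        vec[hash(token) % DIMS] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def index_corpus(root: Path) -> list[tuple[str, list[float]]]:
    """Embed every markdown note under the corpus root."""
    return [(str(p), embed(p.read_text())) for p in sorted(root.rglob("*.md"))]

def retrieve(query: str, index: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the k note paths most similar to the query (cosine similarity)."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), path) for path, v in index]
    return [path for score, path in sorted(scored, reverse=True)[:k]]
```

Once something like this exists, the retrieved notes are simply pasted into the model's context before your question: the model is answering from your corpus, not from the average internet.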

The advanced: a small, custom-trained or fine-tuned model that has internalized your voice and your knowledge. This is not necessary for most people. It is becoming, however, accessible enough that the most serious operators are starting to do it.

The cost of building the corpus is paid daily, in three minutes — capture what you read, capture what you think, capture what you make. The cost of not building it is paid daily as well, in lost reuse: the insight you had three years ago that is no longer available to you when you need it.

The Data pillar is the most under-rated of the six. It is also the pillar that, in my experience, distinguishes the people whose AI use compounds across years from the people whose AI use plateaus in months.


Pillar 6 — Ethics

What kind of impact will we have?

In a corporation, ethics has become a serious topic over the past decade, partly because of regulators, partly because of consumers, partly because the field has begun to mature into the recognition that AI deployed badly damages real people. Responsible AI programs, fairness audits, harm reviews — these are now standard parts of a credible enterprise practice.

In a life, ethics is not a department. It is the way the other five pillars are run.

The personal Ethics pillar asks at least:

Whom am I affecting with my AI use? Drafts of a message to a friend are not just a productivity tactic; they reach the friend. Generated music played to your child is part of your child's musical formation. AI-mediated decisions in a workplace touch people who did not opt into the mediation.

Am I telling the truth? Generative AI makes it cheap to produce work that looks like effort. The temptation to publish what you did not write, claim what you did not earn, present at scale what you did not understand at depth — this temptation is structural. The discipline of telling the truth about what is yours, what is the model's, and what is the collaboration is one of the central ethical disciplines of the new era.

Am I serving people or extracting from them? This is the deepest question, and I will not pretend to have a clean rule for it. It is the question I revisit most often, and it is the question that shapes more of my refusal list than any other.

What kind of person is this making me? A use-case that is technically valuable, financially rewarding, and personally corrosive is still a bad use case. The personal CoE has a luxury the corporation does not: it can measure success on the substrate of who you are becoming.

The Ethics pillar is not a rule. It is a question, asked often, in the company of one's own conscience. The answer changes. The asking does not.


A Sunday morning

Once a quarter, on a Sunday I block out for it, I run my personal CoE review.

It takes ninety minutes, with a coffee. I open a single note that holds the last review. I walk the six pillars in order:

  • Strategy: is the north star current?
  • Governance: any new policies needed? Any policies that have failed?
  • Talent: am I learning what I need to learn? What is missing?
  • Technology: is the stack still right? What needs to be replaced? What needs to be added?
  • Data: is the corpus growing? Are there gaps?
  • Ethics: what am I uncomfortable with? What needs to stop?

I write three lines on each. Sometimes a single line is the whole entry. Sometimes a pillar generates a list of small tasks for the next ninety days.

The whole review fits on two pages. The cost is ninety minutes per quarter — six hours a year. The benefit is that I am running my life with the same architectural intentionality that the Fortune 500 boards I work with run their AI deployments.

The bargain is comical, when you think about it. The architecture that costs Oracle's largest clients eight-figure investments to implement at scale is, for a person, a notebook and ninety minutes a quarter.

This is the gift the framework gives. It is a framework borrowed up — borrowed from the place where the stakes are largest and the lessons hardest-won, and applied to the place where the stakes are most personal.


The democratization

I have been carrying this framework around for years now, in two registers. The professional register pays my rent and lets me work with colleagues whose excellence I respect. The personal register has, over time, quietly reshaped my own life.

A few years ago I realized something simple. The framework is open. There is no proprietary content. The six pillars are not Oracle's invention; they are the accumulated wisdom of the field. The corporate version is expensive only because corporations are expensive. The personal version is free.

I started writing the personal version up in public, on frankx.ai. Not as a product, not as a course, not as a hook for a paid program — just as the framework, as I run it, written down so anyone could use it.

The response surprised me. People are hungry for an architecture for their AI use. The sense that AI is happening to them, rather than being deployed by them, is widespread enough that even a small dose of architectural thinking is welcome. People who have read the personal CoE writeup and applied it have written back to say things like "I finally feel like I am in charge of my own AI life." That is, structurally, what this framework is for.

The democratization is the point.

What enterprises have, individuals can have. What boards run, kitchen tables can run. What costs ten million dollars in one register costs ninety minutes a quarter in another.

This is one of the small, real gifts of the Golden Age. The intellectual frameworks that previously belonged only to the largest organizations are, for the first time, freely available and usable by anyone with the discipline to apply them.

The framework is a gift. The application is the work.


Hand-off

The personal Center of Excellence is the architecture. The architecture exists to serve the work. The work, for those of us who have decided that creation is part of how we live, is the making of things.

The next chapter opens the room I have spent the most time in.

Twelve thousand songs. Three years. The shape of the creator's life when the constraint of execution drops away and the constraint of taste becomes everything.

In Chapter 10 we walk into the Renaissance — the second one, the one happening now, the one that is happening to anyone willing to receive it.

The Lenovo is open. The studio is quiet.

We are walking in.



Footnotes

  1. Da Xue (大學), the Great Learning, traditionally attributed to Confucius's disciple Zengzi, c. 500 BCE. Translation adapted from James Legge.
  2. The Tabula Smaragdina or Emerald Tablet is a foundational text of Western Hermeticism, of disputed origin but extant in Arabic by the 8th century CE. The phrase as above, so below is its most enduring contribution to Western esoteric thought.