How to Choose a University in the Age of AI

There are two dominant models of how a young person can be educated in an AI-saturated world. They disagree about where competence comes from, what “real understanding” means, and what kind of work school is meant to prepare you for.

The Bottom-Up model begins with friction. Taste and judgment are earned through contact with difficulty. You do not get reliable intuition without wrestling with the parts. You do not get first principles without feeling the limits of your own thinking. You do not develop discrimination without seeing your own errors clearly and repeatedly.

In this model, struggle is not a side effect. It is the mechanism. Solving equations by hand. Writing essays without assistance. Memorizing fundamentals. Recalling from memory rather than looking things up. Making mistakes you cannot outsource. These are not rituals. They are how the internal map is built.

The logic is straightforward. Verification requires a felt sense of reality. You cannot evaluate a structural beam if you have never carried weight. You might be able to repeat the rules. You might even speak fluently about safety margins. But without contact, the instinct that something is off does not form.

A purely Bottom-Up education may not disappear. It may become rare. In a world saturated with silicon, the institutions that forbid AI could market themselves as cognitive monasteries. Places where the friction is intentional, the memory work is protected, and the struggle is the product. That kind of education could become a luxury good, chosen by families who can afford to delay tool leverage in exchange for depth and insulation.

The Top-Down model starts elsewhere. It accepts that systems now exist which can generate competent work on demand. If that is true, then the sequence changes. The goal shifts from producing outputs to understanding structures.

Here, students learn by reverse engineering. Instead of laying every brick, they are handed a finished wall. Their job is to interrogate it, stress it, take it apart, and rebuild it in their own reasoning until they can explain why it stands. The work is explanation, critique, calibration, and defense. The finished product is no longer the proof of learning. The explanation is.

The logic here is also plain. If the world is going to be built with powerful tools, then advantage lies in design, debugging, constraint selection, and tradeoff management. The student’s leverage is not in repeating narrow tasks but in evaluating systems that perform those tasks at scale.

Those are the poles. Most universities will not live at either extreme. They will land somewhere in between, protecting certain foundations while integrating AI where leverage matters. The real question is not philosophical. It is operational.

Who carries the burden of adaptation?

You can answer that question while you are still shopping. Ignore the slogans and look for the signals.

Start with what the school publishes. Look for an AI policy that reads like a real document, not a warning poster. A serious policy names categories of use with examples. It distinguishes between outlining, drafting, editing, research support, coding assistance, and final submission. It explains what must be disclosed and what is prohibited. If the policy is vague, enforcement becomes arbitrary. If it is only prohibition, the burden is being pushed onto the student.

Then look at the course catalog, not the marketing site. Do you see AI mentioned outside computer science? Writing. Business. Biology. Economics. Philosophy. Design. If AI appears only in one corner, the institution is treating it as a specialty topic rather than a general instrument.

Next, look for structure that teaches verification. This can show up in small ways: library workshops, required research methods courses, writing center guidance, or published rubrics that emphasize sources and reasoning. When a school is serious about AI, it becomes serious about evidence, because evidence is what keeps AI useful without letting it quietly fabricate.

Ask a simple question during a tour or an admitted-student event. “In first-year writing, what is the AI policy?” You are not looking for the “right” answer. You are looking for a crisp answer. If the staff or faculty member hesitates, contradicts someone else, or falls back to moral language, that is a signal the campus has not operationalized its position.

Email two professors whose classes you might actually take, one in the humanities and one in STEM. Ask: “How do you handle AI use in your class? What is allowed, what is not, and what do you want students to learn from the restrictions?” The speed and specificity of the reply will tell you more than a glossy brochure. Pay attention to tone, too. If the response feels irritated, evasive, or moralizing, you are seeing fear or fatigue. If it feels crisp, practical, and even a little interested, you are seeing competence and ownership.

Ask current students a concrete scenario question. “If I use AI to outline a paper, and I disclose it, what happens?” Then ask the mirror version: “If I use it and do not disclose it, what happens?” You are listening for whether the system is designed for honesty or designed for cat-and-mouse.

Look for assignments that force explanation, not just output. You can ask this directly: “Do students ever have to defend their reasoning in writing or out loud?” If the answer is yes, AI becomes less of a shortcut, because the student still has to own the logic. If the answer is no, the environment will reward whoever can produce polished artifacts fastest.

Pay attention to how the school handles foundations. In a hybrid model, certain courses will explicitly protect internalization. You may hear phrases like “no-tool exams,” “handwritten problem sets,” “closed-book proofs,” or “in-class writing.” That is not anti-AI by itself. It can be a deliberate choice about what must be carried in the student’s head.

Also look for the other half. Does the school provide any structured AI practice at all? Not a single orientation lecture, but repeated reps. Workshops. Office hours. Example prompts and example failures. Guidance on disclosure. Guidance on verification. If the only message is “don’t cheat,” students will still use the tools, and they will learn the hard parts alone.

Finally, listen for whether the adults can speak from exposure. Not hype. Not panic. Simple competence. Can they name benefits and failure modes without changing the subject? Can they explain what they are trying to protect, and what they are trying to accelerate?

These signals reveal the path long before you sit in a classroom. They tell you whether the institution is carrying the burden of adaptation, or quietly handing it to the student.
