Framework For AI Integration In Learning Design


How I Synthesized 16 ID Frameworks Into One

Back in January, I discovered that AI and I were doing each other’s jobs. I was doing clerical work. Formatting, document structuring, and lots of copy-and-pasting. Claude was doing creative work. Being asked to make judgment calls, to speculate on the emotional impact of content, and to draw insights from lived experiences.

On top of that, I wasn’t sure how to ask AI to check over what it was producing. Sweep over everything using an ADDIE lens and tell me where the alignments were off? Start smaller with just checking that it followed Bloom’s Taxonomy when it wrote the objectives? Wait, don’t forget about Universal Design for Learning and accessibility checks…

After a few weeks, these consistent moments of confusion and frustration made me step back and ask myself a question:

Is there an evidence-based framework that guides Instructional Designers on how to integrate AI with Instructional Design best practices throughout the course development process?

Not a framework for teaching students about AI. Not a list of AI tools or prompts to try. An actual methodology for Instructional Designers to use while designing and building courses with AI.

I went looking, and what I found was a gap.

What Existed And What Didn’t

The Instructional Design field has no shortage of frameworks built up over the years. ADDIE, SAM, Action Mapping, Bloom’s Taxonomy. All easily recognizable, and for good reason. They work.

In addition, the AI-integrated teaching and learning field is developing more and more every day. We now have frameworks for teaching with AI in the classroom, guidelines for students using AI tools ethically, and methods for learning AI literacy.

As far as AI-integrated learning design goes, some great progress has been made. Adaptations of ADDIE that add AI tools to existing phases. Content generation frameworks like GAIDE (Generative AI for Instructional Development and Education).

However, I found there was still a gap. IDs were missing a systematic methodology that could help us decide when to use AI versus when to stay human throughout the entire development process. We were missing a framework that would give AI a credible, research-based foundation from which to assist us with our work.

16 Frameworks, 50+ Principles, One Problem

I decided to build what I couldn’t find. I started by going back to the frameworks I often felt stuck choosing between. We already trust and use them; I just needed to figure out how to use the right ones with AI to produce the best results. But which were the right ones?

I identified 16 of them.

Four process frameworks: ADDIE, SAM, Backward Design, and Action Mapping.

Twelve learning science frameworks: Bloom’s Taxonomy, Gagné’s 9 Events of Instruction, Merrill’s First Principles, Cognitive Load Theory, Mayer’s Multimedia Principles, Universal Design for Learning, the ARCS Model, Constructivism, Social Learning Theory (Bandura), Experiential Learning (Kolb), Scaffolding Principles (Vygotsky), and WCAG/Accessibility Standards.

The core canon of Instructional Design methodologies and learning science frameworks.

Together, these 16 frameworks contain over 50 individual principles and guidelines. No wonder we’re all picking and choosing, using only a handful at a time. No Instructional Designer or AI tool can meaningfully apply 50+ principles during the design process.

I noticed many of the principles were overlapping, and that’s the part I found most interesting. Backward Design’s emphasis on starting with objectives echoed Action Mapping’s focus on performance goals. Cognitive Load Theory’s concern with extraneous load connected directly to Mayer’s Coherence Principle. The frameworks weren’t contradicting each other nearly as much as they were repeating the same things in different ways.

The Synthesis: 21 Principles, 5 Phases

Through systematic analysis, I deduplicated those 50+ principles down to 21. Each can be traced back to one or more source methodologies, yet none overlaps with the others.

Take this piece of feedback: “Your objective says learners will ‘evaluate’ treatment options, but the assessment is multiple-choice identification. What would this activity look like if it required evaluation instead? (Principle 5: Match activity to cognitive level).” You can trace it to Bloom’s Taxonomy (match assessments and activities to the cognitive level hierarchy), Backward Design (assessment aligned to objectives at the appropriate cognitive demand), and Merrill’s First Principles (assessment requires demonstration at the specified performance level). You get the actionable insight and the academic credibility to back it up.
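To make that provenance concrete, here is a minimal sketch, in Python, of how a synthesized principle might carry its evidence trail. The Principle structure and its field names are my own illustration, not part of the framework itself; only Principle 5 and its three sources come from the example above.

```python
from dataclasses import dataclass, field

@dataclass
class Principle:
    """One synthesized principle plus the frameworks it traces back to."""
    number: int
    name: str
    check: str                 # the actionable question to ask of a design
    sources: list[str] = field(default_factory=list)

# Principle 5 from the feedback example above; the encoding is hypothetical
principle_5 = Principle(
    number=5,
    name="Match activity to cognitive level",
    check=("Does the activity demand the same cognitive level "
           "the objective's verb specifies?"),
    sources=[
        "Bloom's Taxonomy: match assessments to the cognitive level hierarchy",
        "Backward Design: align assessment to objectives at the right demand",
        "Merrill's First Principles: require demonstration at the stated level",
    ],
)

# Feedback can cite both the principle and its evidence base
print(f"Principle {principle_5.number}: {principle_5.name}")
for src in principle_5.sources:
    print(f"  draws from {src}")
```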

The 21 principles are also organized across five workflow phases that mirror how Instructional Designers actually work.

Phase 1: Planning

Phase 2: Structure Design

Phase 3: Experience Design

Phase 4: Formatting Design

Phase 5: Review

Each of the 21 principles lives in one primary phase, and a few recur across multiple phases. This lets designers, and AI, call on the right principles at the right time, reducing extraneous cognitive load while preserving flexibility and adaptability.
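One way to picture that routing, as a minimal sketch: a phase-to-principle map that a designer or an AI assistant consults so only the relevant checks are loaded at each step. The specific principle numbers assigned to each phase below are invented for illustration; the article does not enumerate them.

```python
# Hypothetical phase-to-principle routing; the numbering is illustrative only.
PHASE_PRINCIPLES: dict[str, set[int]] = {
    "planning":          {1, 2, 3},
    "structure_design":  {4, 5, 6, 7},
    "experience_design": {8, 9, 10, 11, 12, 13},
    "formatting_design": {14, 15, 16, 17},
    "review":            {5, 18, 19, 20, 21},  # Principle 5 recurs here
}

def principles_for(phase: str) -> set[int]:
    """Return only the checks that apply in the given workflow phase."""
    return PHASE_PRINCIPLES[phase]

# A principle can live in a primary phase and recur in another
assert 5 in principles_for("structure_design") & principles_for("review")
```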

Why 21 And Not 50

You may be thinking, “If AI can hold unlimited information, why bother distilling 16 frameworks down to 21 principles? Why not just give AI all 50+ raw principles and let it sort through them?”

The reason is that the framework isn’t just for AI. It’s for the designer.

If you dump 16 raw frameworks into an AI prompt and ask it to review your course module, you’ll get feedback that references Gagné’s Event 4, Merrill’s Application Principle, and Bloom’s Level 3. You’ll have no idea whether those are three separate problems or the same problem described three different ways. You can’t evaluate what the AI is telling you because there is too much information and not enough structure.

The synthesis solves three problems at once. It makes AI output interpretable: each piece of feedback references one specific principle, so you can actually understand and evaluate it. It removes contradictions and redundancy, so the same issue isn’t flagged four times under four different methodologies. Lastly, it keeps the designer in the driver’s seat: 21 is a realistic number of principles for a professional ID to recognize and, over time, internalize, which means you’ll always be positioned to both understand and question the AI outputs you receive.
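Here is a minimal sketch of what that interpretability looks like in practice: a prompt builder that hands the model only the named, in-scope principles rather than 16 raw frameworks. The function name, the catalog, and the prompt wording are all hypothetical; only Principle 5’s name comes from the article.

```python
# Hypothetical catalog: principle number -> name (only Principle 5 is real)
CATALOG: dict[int, str] = {
    4: "Placeholder principle for illustration",
    5: "Match activity to cognitive level",
}

def build_review_prompt(in_scope: set[int], module_text: str) -> str:
    """Cite only the in-scope principles, each by number, so every flag in
    the model's feedback maps back to exactly one named check."""
    checks = "\n".join(f"- Principle {n}: {CATALOG[n]}"
                       for n in sorted(in_scope) if n in CATALOG)
    return ("Review the course module below against ONLY these principles, "
            f"citing each by number:\n{checks}\n\nMODULE:\n{module_text}")

print(build_review_prompt({4, 5}, "Objective: learners will evaluate..."))
```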

Four Core Philosophies

As I researched and built, four underlying questions kept surfacing alongside the 21 principles, each crucial to guiding and checking decision-making. These became the four core philosophies of my framework.

Resource Vs. Experience

Is this information the learner needs to reference later, or an experience designed to change behavior and build skill? Clarify before building anything. The answer determines format, complexity, interaction level, and everything else that follows. Without this, you overbuild references that should be simple or underbuild experiences that need depth.

Clerical Vs. Creative

Is this task mechanical or does it require human judgment? Let AI handle compliance checking, pattern tracking, and holding frameworks in memory. Let humans handle implementation decisions, SME management, design judgment, and team collaboration. Without this, you waste human energy on tasks AI does better or trust AI with decisions only humans should make. Capability does not equal suitability.

Learner Reality Test

Would the actual person this is designed for, in their real context, with their real constraints, find this usable and valuable? Design for your real audience, not the ideal one. Without this, you build courses that impress designers but frustrate learners.

Evergreen Test

Would this still work if the delivery method changed, if it changed hands or contexts, or if someone needed to update it? Design with evergreen in mind. Without this, you build static learning experiences chained to one tool, one person, or one moment in time.

These philosophies are the decision-making lenses that govern how and when you apply the principles. The principles tell you what to check for. The philosophies tell you how to think about what you’re building before you even get to the principles.
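As one possible way to operationalize the philosophies, here is a minimal sketch of them as a pre-build checklist. The question wording is paraphrased from the four tests above; the gate structure and names are my own.

```python
# The four philosophies as pre-build gates; structure and names are illustrative.
PHILOSOPHY_GATES: dict[str, str] = {
    "resource_vs_experience":
        "Reference material to look up later, or an experience meant to "
        "change behavior and build skill?",
    "clerical_vs_creative":
        "Mechanical task (route to AI) or judgment call (keep human)?",
    "learner_reality_test":
        "Would the real learner, in their real context, find this usable?",
    "evergreen_test":
        "Would this survive a change of tool, owner, or delivery method?",
}

def pre_build_check(answers: dict[str, str]) -> list[str]:
    """Return the philosophy questions still unanswered before design begins."""
    return [q for key, q in PHILOSOPHY_GATES.items() if not answers.get(key)]

unresolved = pre_build_check({"resource_vs_experience": "experience"})
print(f"{len(unresolved)} questions to resolve before applying the principles")
```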

What This Means For The Field

The framework is institution- and tool-agnostic by design. It works regardless of your LMS, your organizational context, your specific compliance requirements, or your AI tool of choice. Those are implementation details. The principles and philosophies are universal.

I’m currently testing this framework, applying it to actual projects and tracking what happens when design decisions are informed by all 21 principles and four philosophies rather than by a subset I’m able to remember and implement at a given moment.

The early results are promising. Issues that would typically surface during QA review are getting caught during the design phase. Feedback from AI tools is more interpretable because it’s organized around a shared principle set. Perhaps most importantly, the process feels more intentional. My course materials feel more solid, more cohesive, and more impactful for learners. I feel my design skills have already improved from getting constant feedback with clear explanations and evidence-based reasoning.

Taking established, trusted learning science and synthesizing it into something systematically accessible through AI collaboration is a strategy this field has been missing. Not replacing what we know with something new, but making what we already know even more useful by leveraging the capabilities of AI. More to come as this continues to grow, but I would say: gap filled.


