
Rethinking ePortfolio Assessment in the Age of AI: The 3P Framework

  • Nov 5
  • 3 min read
How educators can move beyond policing AI to designing assessments that promote authentic learning

In September 2025, Jobs and Skills Australia (JSA) released an interesting report on the generative AI transition. It found that AI is already automating key entry-level tasks in fields like law, health, and the creative industries: the very tasks we've traditionally used to build foundational skills in our graduates.


This creates a critical gap. How do we assess competence when we can no longer be sure who, or what, created the work?

This isn't a future problem; it's here now. In that same JSA report, a case study on the University of Sydney highlighted their solution: allowing students to use AI, but requiring them to "document and reflect on their process."

This confirms what many of us have felt: we must pivot from assessing the product to assessing the journey.


The Problem with 'Policing' AI

As researchers and educators, we've all seen AI fabricate sophisticated outputs that imitate what Rudolph et al. (2023) call the "surface features" of learning.

Our initial institutional response has often been to police this. But as scholars like Bearman & Ajjawi (2023) have argued, this is a flawed, 'black box' approach. AI detectors are unreliable and, as Liang et al. (2023) have shown, demonstrably biased against non-native English writers.

It's an unwinnable and inequitable arms race.


The 3P Framework: A Pedagogical Pivot

In a paper I've just presented at the 2025 ePortfolio Forum, I propose the 'Process, Provenance, and Persona' (3P) framework as a response to this dilemma.

It's not a rigid rubric. It's a set of complementary lenses to help us design assessments that value the parts of learning that cannot be outsourced to a machine.

Here's how it works:

1. ⚙️ PROCESS (The 'How')

Instead of just the final product, we value the messy, iterative journey. We assess the "learning in motion."

  • What it looks like: Requiring students to submit version histories, drafts, evidence of feedback integration, and (critically) annotated AI prompts showing how they collaborated with the tool and why they made certain decisions.

2. 🔍 PROVENANCE (The 'Where')

This is about the traceable lineage of ideas. We value a student's ability to be transparent about all their sources—both human and machine.

  • What it looks like: Moving beyond simple citation to require a concise "AI-Use Statement" (a practice Nature and other top journals now demand). This statement discloses which tools were used and for what purpose (e.g., "I used AI to brainstorm, but not to write my reflection").

3. 👤 PERSONA (The 'Who' & 'Why')

This is the "irreducibly human" pillar. It's the student's unique, critical, reflective voice, which AI cannot convincingly replicate.

  • What it looks like: Assessing reflective vignettes on choices and mistakes, or forward-looking action plans. This draws on decades of scholarship on reflective practice (e.g., Moon, 2006) and assesses a student's metacognitive growth and their ability to connect learning to their own values and professional identity.


What This Looks Like in Practice

Imagine a six-week ePortfolio project. Instead of grading only the final artefacts, we shift the assessment weighting:

  • PROCESS (35%): Assesses version history, justifications for AI use, and feedback integration.

  • PROVENANCE (35%): Assesses accurate citations, a clear AI-use statement, and a "source trail."

  • PERSONA (30%): Assesses the reflective commentary for a distinctive voice and critical self-evaluation.

This rebalances the assessment to value the entire learning journey, making it far more resilient to AI.
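For readers who want to see the weighting made concrete, here is a minimal Python sketch of how the 35/35/30 split combines into a final mark. The function name and interface are my own illustration, not part of the framework:

```python
# Illustrative 3P weighting from the six-week project example above.
WEIGHTS = {"process": 0.35, "provenance": 0.35, "persona": 0.30}

def weighted_mark(scores: dict[str, float]) -> float:
    """Combine per-pillar scores (each 0-100) into a final mark."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the three pillars")
    return round(sum(WEIGHTS[p] * scores[p] for p in WEIGHTS), 2)

# Example: strong process evidence, solid provenance, good reflection.
print(weighted_mark({"process": 80, "provenance": 75, "persona": 85}))
```

Because no single pillar dominates, a polished but undocumented final artefact can no longer carry the grade on its own.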


The Reality Check: It's About Workload

This is not 'set and forget'. The most significant barrier to this approach is, of course, workload. As the JSA report notes, this kind of authentic engagement is time-intensive.

The solution must be strategic:

  1. Go Programmatic: Scaffold these 3P skills from the first year to the capstone, rather than in one course.

  2. Provide Clear Guidance: Use templates for AI logs and exemplars of good reflection to reduce cognitive load for both students and staff.

  3. Be Realistic: Adjust evidence requirements to focus on key outcomes, not exhaustive documentation.


Stop Policing, Start Designing

Generative AI isn't a problem to be solved; it's a new reality to which we must adapt. It challenges us to stop policing products and, instead, start designing assessments that value the human-led process.

This is how we defend academic integrity and prepare our students for a future where they will, without a doubt, be working with AI.


How is your institution handling this challenge? I'd love to hear your thoughts in the comments.


 
 
 
