AISE1_CLASS/Prompting Exercise/ex02_persona_task_data.py

152 lines
5.5 KiB
Python
Raw Blame History

This file contains ambiguous Unicode characters

This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.

"""
Exercise 2 Persona, Task, and Data in a Structured Prompt
============================================================
AISE501 · Prompting in Coding · Spring Semester 2026
Learning goals
--------------
* Use XML tags to separate three prompt concerns: WHO the LLM is,
WHAT it should do, and the DATA it should work with.
* Pass a real Python file as context (RAG-style) inside a <code> tag.
* Iterate on the prompt to extract more specific information.
The file analyze_me.py contains several bugs and style issues.
You will ask the LLM to find and explain them.
Tasks
-----
Part A Build a structured prompt with <persona>, <task>, and <code> tags
and ask the LLM to review analyze_me.py (TODOs 1-4).
Part B Refine the prompt to request a prioritised bug list (TODOs 5-6).
Part C Ask for a corrected version of one specific function (TODO 7).
"""
from pathlib import Path
from server_utils import chat, get_client, print_messages, print_separator
client = get_client()
# Read the file we want the LLM to analyse
code_to_review = Path("analyze_me.py").read_text()
# ── Part A: Persona + Task + Code ─────────────────────────────────────────────
print_separator("Part A Structured Prompt: Persona / Task / Code")
# TODO 1: Fill in the <persona> tag.
# Define a senior Python engineer who is rigorous about correctness
# and follows PEP-8 and best practices.
# TODO 2: Fill in the <task> tag.
# Ask the LLM to review the Python code and identify ALL bugs,
# listing each one with a short explanation of why it is a bug.
# TODO 3: The <code> tag already contains the file — do not change it.
# TODO 4: Build the messages list using only a user message (no system prompt yet).
# Call chat() and print the result.
prompt_a = f"""\
<persona>
You are a Python engineer who is rigorous about correctness and follows PEP-8 and best practices.
</persona>
<task>
Review the Python code and identify ALL bugs, listing each one with a short explanation of why it is a bug.
</task>
<code language="python" filename="analyze_me.py">
{code_to_review}
</code>"""
messages_a = [
{"role": "user", "content": prompt_a}
]
# print_messages(messages_a)
# response_a = chat(client, messages_a)
# print(response_a)
# ── Part B: Refine Ask for a Prioritised Bug List ───────────────────────────
print_separator("Part B Refined Prompt: Prioritised Bug List")
# TODO 5: Extend the <task> from Part A to ask the LLM to:
# - Separate bugs by severity: Critical / Medium / Style
# - For each bug: state the line number, the problem, and a one-line fix hint
#
# Tip: add a <output_format> tag that describes exactly how you want the answer
# structured (plain text for now — we tackle real machine output in Ex 3).
# TODO 6: Build messages_b with a system prompt that reinforces the persona
# and a user message with the refined prompt.
# Call chat() and print the result.
system_b = """\
<request>
<persona>You are a master python developer and teacher</persona>
<style>You follow the PEP 8 style guide</style>
<constraints>Format your response in json</constraints>
</request>
"""
prompt_b = f"""\
<persona>
You are a Python engineer who is rigorous about correctness and follows PEP-8 and best practices.
</persona>
<task>
Review the Python code and identify ALL bugs, listing each one with a short explanation of why it is a bug.
Separate bugs by severity: Critical / Medium / Style
or each bug: state the line number, the problem, and a one-line fix hint
</task>
<output_format>
...
</output_format>
<code language="python" filename="analyze_me.py">
{code_to_review}
</code>"""
messages_b = [
{"role": "system", "content": system_b},
{"role": "user", "content": prompt_b},
]
print_messages(messages_b)
response_b = chat(client, messages_b)
print(response_b)
# ── Part C: Request a Corrected Function ──────────────────────────────────────
print_separator("Part C Ask for a Corrected Function")
# TODO 7: Pick one buggy function from analyze_me.py (e.g. calculate_statistics).
# Write a new user message — continuing the SAME conversation as Part B —
# that asks the LLM to rewrite that function with all bugs fixed,
# including proper type hints and a docstring.
#
# Key insight: you can reuse the model's previous response by appending it to
# the messages list as an "assistant" message, then adding a new "user" message.
# This is how multi-turn conversations work with the API.
messages_c = messages_b + [
{"role": "assistant", "content": response_b}, # LLM's previous answer
{"role": "user", "content": "Fix all bugs, keep the rest as it is"},
]
print_messages(messages_c)
response_c = chat(client, messages_c)
print(response_c)
# ── Reflection Questions ──────────────────────────────────────────────────────
print_separator("Reflection Questions")
print(
"1. Did the LLM find all 7 bugs? Which did it miss?\n"
"2. How did the <output_format> tag change the structure of the answer?\n"
"3. What is the advantage of continuing a conversation vs. starting fresh?\n"
"4. How would you scale this pattern to a large codebase (many files)?\n"
)