# AISE1_CLASS/Prompting Exercise/ex01_xml_prompting.py
"""
Exercise 1 Basic XML Structured Prompting
============================================
AISE501 · Prompting in Coding · Spring Semester 2026
Learning goals
--------------
* Connect to the local LLM server and send your first prompt.
* Understand the difference between unstructured and XML-structured prompts.
* See how structure helps the model parse and prioritise different parts
of your request.
Tasks
-----
Part A Run the unstructured prompt (already done for you). Read the response.
Part B Complete the XML-structured version of the same request (TODOs 1-3).
Part C Add a system prompt to set the response style (TODOs 4-5).
"""
from server_utils import chat, get_client, print_messages, print_separator

client = get_client()

# ── Part A: Unstructured (Zero-Shot) Prompt ───────────────────────────────────
# This section is complete. Run it, read the response, then move on.
print_separator("Part A Unstructured Prompt")
unstructured_messages = [
    {
        "role": "user",
        "content": (
            "Explain what a Python list comprehension is, "
            "give an example that filters even numbers from a list, "
            "and list two common mistakes beginners make."
        ),
    }
]
print_messages(unstructured_messages)  # ← always inspect what you send!
response_a = chat(client, unstructured_messages)
print(response_a)
# ── Part B: Structured Prompt with XML Tags ───────────────────────────────────
# Use XML tags to structure the same request more precisely.
# Named sections help the model parse and prioritise your intent.
print_separator("Part B Structured Prompt with XML Tags")
# TODO 1: Fill in the three XML sections below.
# Use the same topic as Part A but make each section specific.
#
# <topic> the Python concept to explain
# <example> what the code example should demonstrate
# <focus> two or three specific points you want covered in the answer
#
# Tip: XML tag names are arbitrary — choose names that make sense to a
# human reader and the model will understand them too.
structured_content = """\
<request>
    <topic>
        Python list comprehensions
    </topic>
    <example>
        Filter even numbers from a list
    </example>
    <focus>
        Syntax overview and two common beginner mistakes
    </focus>
</request>"""
# TODO 2: Build the messages list.
# Use structured_content as the content of a "user" message.
#
# Reminder: messages is a list of dicts with keys "role" and "content".
# "role" is one of "system", "user", or "assistant".
structured_messages = [
    {
        "role": "user",
        "content": structured_content,
    }
]
# TODO 3: Call chat() with structured_messages, store the result, print it.
# Compare the output with response_a above.
# Always call print_messages() before chat() to see the full prompt.
print_messages(structured_messages)
response_b = chat(client, structured_messages)
print(response_b)
# ── Part C: Adding a System Prompt ────────────────────────────────────────────
# A system prompt lets you define a persona and global rules for every
# response in the conversation without repeating yourself each time.
print_separator("Part C Adding a System Prompt")
# TODO 4: Write an XML-structured system prompt that defines:
# <persona> who the LLM should be
# <style> tone and formatting rules
# <constraints> length or content limits
#
# Example persona: "experienced Python tutor who always shows code first"
system_content = """\
<system>
    <persona>You are a master Python developer and patient teacher.</persona>
    <style>You follow the PEP 8 style guide in all code you show.</style>
    <constraints>Format your entire response as JSON.</constraints>
</system>
"""
# TODO 5: Build a messages list that puts the system prompt FIRST (role="system"),
# followed by the structured user message from Part B.
# Call chat() and print the result.
#
# Reflection: How did the system prompt change the answer compared to Part B?
messages_c = [
    {"role": "system", "content": system_content},
    {"role": "user", "content": structured_content},
]
print_messages(messages_c)
response_c = chat(client, messages_c)
print(response_c)
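# The <constraints> section above asks for JSON, but models do not always
# comply, and some wrap their JSON in a Markdown code fence. A defensive
# parse (a sketch; try_parse_json is not part of server_utils) avoids a crash:
import json


def try_parse_json(text: str):
    """Return the parsed object, or None if `text` is not valid JSON."""
    stripped = text.strip()
    if stripped.startswith("```"):  # remove a surrounding Markdown code fence
        stripped = stripped.strip("`").strip()
        if stripped.startswith("json"):
            stripped = stripped[len("json"):]
    try:
        return json.loads(stripped)
    except json.JSONDecodeError:
        return None


# e.g. parsed_c = try_parse_json(response_c); if it is None, the model
# ignored the <constraints> tag.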
# ── Reflection Questions ──────────────────────────────────────────────────────
print_separator("Reflection Questions")
print(
    "1. How did XML structure change the format and depth of the response?\n"
    "2. What happens if you use inconsistent or missing closing tags?\n"
    "3. When would you NOT bother with XML structure?\n"
    "4. How does the system prompt interact with the user message?\n"
)