diff --git a/skills/test/test-case-generator/SKILL.md b/skills/test/test-case-generator/SKILL.md new file mode 100644 index 0000000..20908e8 --- /dev/null +++ b/skills/test/test-case-generator/SKILL.md @@ -0,0 +1,124 @@ +--- +name: test-case-generator +description: 根据网址、需求文档或接口文档,自动分析并生成完整测试用例集,输出 YAML 和 Excel 两种格式文件。适用场景:(1) 给定 URL 地址,分析页面功能并生成 UI/功能测试用例;(2) 给定 REST API / Swagger / OpenAPI 文档,生成接口测试用例;(3) 给定需求文档(PRD / 功能描述),拆解功能点并生成测试用例集。触发词:生成测试用例、写测试用例、测试用例文档、test case、接口测试、页面测试用例、用例集。 +--- + +# 测试用例生成器 + +## 工作流程 + +``` +输入(URL / 需求文档 / 接口文档) + ↓ +Step 1: 分析输入,提取所有功能点/接口 + ↓ +Step 2: 按模块组织,设计测试用例(覆盖正向/异常/边界/权限) + ↓ +Step 3: 输出 YAML 文件(test_cases.yaml) + ↓ +Step 4: 运行脚本,生成 Excel 文件(test_cases.xlsx) +``` + +--- + +## Step 1: 分析输入 + +### 输入类型识别 + +| 输入类型 | 分析重点 | +|----------|----------| +| **URL(页面)** | 访问页面,识别表单、按钮、列表、交互元素,抓取页面结构 | +| **REST API 文档** | 逐接口分析:method、path、请求参数、响应码、业务逻辑 | +| **Swagger/OpenAPI** | 解析所有 endpoint,提取 schema、required 字段、enum 值 | +| **需求文档/PRD** | 拆解功能点,识别核心流程、异常流程、业务规则 | + +若给定 URL,使用浏览器工具访问页面获取实际内容后再设计用例。 + +--- + +## Step 2: 设计测试用例 + +**每个功能点/接口必须覆盖以下维度(酌情取舍):** + +1. **正向** - 合法输入,验证功能正常 +2. **异常/负向** - 非法参数、缺少必填、类型错误 +3. **边界值** - 最大/最小值、临界长度 +4. **权限** - 未登录、token 过期、越权访问 +5. 
**业务规则** - 状态流转、业务约束条件
+
+**每条用例必须标注 `automatable` 字段**,判断规则如下:
+
+| 值 | 含义 | 适用场景 |
+|----|------|----------|
+| `是` | 可完全自动化 | API 接口测试、有明确输入输出的功能测试、边界/异常测试、冒烟测试、集成测试 |
+| `否` | 必须手动执行 | 需人工判断的 UI 视觉验证、易用性/体验评估、探索性测试、验证码/人机验证 |
+| `部分` | 部分可自动化 | 兼容性测试(自动化跑流程+人工验视觉)、安全测试(自动化扫描+人工渗透)、含主观判断步骤的 UI 测试 |
+
+格式规范和示例:详见 `references/test_case_format.md`
+
+---
+
+## Step 3: 输出 YAML 文件
+
+将所有用例按以下结构输出为 `.yaml` 文件,并保存到当前工作目录:
+
+```yaml
+name: "<测试套件名称>"
+version: "1.0"
+created_at: "<今日日期>"
+source: "<输入来源>"
+test_cases:
+  - id: TC-001
+    module: "<模块名>"
+    name: "<用例名称>"
+    type: "<用例类型>"  # 功能测试/边界测试/异常测试/UI测试/安全测试等
+    priority: "<优先级>"  # 高/中/低
+    automatable: "<是/否/部分>"  # 是=可自动化,否=需手动,部分=部分可自动化
+    preconditions:
+      - "<前置条件>"
+    steps:
+      - step: 1
+        action: "<操作步骤>"
+        expected: "<步骤预期>"
+    expected_result: "<整体预期结果>"
+    actual_result: ""
+    status: "未执行"
+    remarks: ""
+```
+
+文件名建议:`<功能名称>_test_cases.yaml`
+
+---
+
+## Step 4: 生成 Excel 文件
+
+YAML 文件输出完成后,立即运行以下脚本生成 Excel:
+
+```bash
+python <skill目录>/scripts/generate_excel.py <yaml文件路径> [excel输出路径]
+```
+
+**示例:**
+```bash
+python test-case-generator/scripts/generate_excel.py ./user_login_test_cases.yaml
+```
+
+脚本会自动在同目录生成同名 `.xlsx` 文件,包含两个 sheet:
+- **测试用例**:所有用例详情,含颜色标注优先级、隔行底色
+- **统计汇总**:按优先级、类型、模块的数量统计
+
+> **依赖**:脚本会自动安装 `openpyxl` 和 `pyyaml`(如未安装)
+
+---
+
+## 输出规范
+
+- YAML 和 Excel **保存到用户当前工作目录**(或用户指定目录)
+- 文件名前缀反映被测对象,如 `login_api_test_cases.yaml`
+- 完成后告知用户两个文件的**完整路径**
+- **用例数量**:根据功能复杂度自动判断,通常每个接口/功能点至少 3-5 条用例
+- 优先保证核心流程覆盖,再补充边缘场景
+
+## 参考资料
+
+- 测试用例字段格式、类型枚举、完整示例:见 `references/test_case_format.md`
diff --git a/skills/test/test-case-generator/references/test_case_format.md b/skills/test/test-case-generator/references/test_case_format.md
new file mode 100644
index 0000000..ed81124
--- /dev/null
+++ b/skills/test/test-case-generator/references/test_case_format.md
@@ -0,0 +1,180 @@
+# 测试用例格式规范
+
+## YAML 格式结构
+
+### 顶层结构
+
+```yaml
+name: "用例集名称"
+version: "1.0"
+created_at: "2024-01-01"
+source: "来源(URL / 文档名)"
+test_cases:
+  - ...
+``` + +### 单条用例字段 + +| 字段 | 必填 | 说明 | +|------|------|------| +| `id` | 是 | 用例ID,如 TC-001 | +| `module` | 是 | 所属模块/功能模块 | +| `name` | 是 | 用例名称,简洁描述测试点 | +| `type` | 是 | 用例类型(见下方枚举) | +| `priority` | 是 | 优先级:高/中/低 | +| `automatable` | 是 | 是否可自动化:是/否/部分(见下方判断规则) | +| `preconditions` | 否 | 前置条件列表 | +| `steps` | 是 | 测试步骤列表 | +| `expected_result` | 是 | 预期结果 | +| `actual_result` | 否 | 实际结果(执行后填写) | +| `status` | 否 | 执行状态,默认"未执行" | +| `remarks` | 否 | 备注 | + +### 自动化可行性判断规则 + +根据用例类型和测试内容,按以下规则填写 `automatable`: + +| 值 | 适用场景 | +|----|----------| +| `是` | API 接口测试、功能测试(有明确输入输出)、边界/异常测试、冒烟测试、集成测试、性能测试(脚本化) | +| `否` | 需人工主观判断的 UI 视觉验证、易用性/体验评估、探索性测试、验证码/人机验证、需实物操作的测试 | +| `部分` | 兼容性测试(自动跑流程 + 人工看视觉)、安全测试(自动扫描 + 人工渗透)、含主观判断步骤的 UI 测试 | + +**快速判断口诀**:能写断言就填"是",靠肉眼感觉填"否",两者都有填"部分"。 + +### 用例类型枚举 + +- `功能测试` - 功能点正常/异常流程 +- `边界测试` - 边界值、临界值 +- `异常测试` - 异常输入、错误码验证 +- `性能测试` - 响应时间、并发量 +- `安全测试` - 权限、鉴权、注入攻击 +- `兼容性测试` - 多端/多浏览器 +- `UI测试` - 页面展示、交互 +- `集成测试` - 接口联调 +- `冒烟测试` - 核心流程快速验证 + +### 优先级说明 + +- `高` (P0/P1) - 核心功能、主流程、必须通过 +- `中` (P2) - 重要功能、常见场景 +- `低` (P3) - 边缘场景、次要功能 + +--- + +## 完整示例 + +### REST API 接口测试用例示例 + +```yaml +name: "用户登录接口测试" +version: "1.0" +created_at: "2024-01-15" +source: "POST /api/v1/auth/login" +test_cases: + - id: TC-001 + module: "用户认证" + name: "正常登录-用户名密码正确" + type: "功能测试" + priority: "高" + automatable: "是" + preconditions: + - "系统服务正常运行" + - "测试账号 test@example.com 已存在且状态正常" + steps: + - step: 1 + action: "发送 POST /api/v1/auth/login 请求,Body: {\"email\":\"test@example.com\",\"password\":\"Test@123\"}" + expected: "HTTP 200 OK" + - step: 2 + action: "检查响应体" + expected: "包含 token 字段,code=0,message=\"success\"" + expected_result: "返回 HTTP 200,响应包含有效 JWT token,用户信息正确" + status: "未执行" + + - id: TC-002 + module: "用户认证" + name: "密码错误-返回401" + type: "异常测试" + priority: "高" + automatable: "是" + preconditions: + - "系统服务正常运行" + steps: + - step: 1 + action: "发送 POST /api/v1/auth/login,密码填写错误值 wrongpassword" + expected: "HTTP 401" + - step: 2 + action: "检查响应体" + 
expected: "code=401,message 包含'密码错误'或'认证失败'" + expected_result: "返回 HTTP 401,不返回 token" + status: "未执行" + + - id: TC-003 + module: "用户认证" + name: "邮箱格式不合法" + type: "边界测试" + priority: "中" + automatable: "是" + steps: + - step: 1 + action: "发送请求,email 字段传 'notanemail'" + expected: "HTTP 400" + expected_result: "返回 400,提示邮箱格式错误" + status: "未执行" +``` + +### 页面/UI 测试用例示例 + +```yaml +name: "用户注册页面测试" +version: "1.0" +source: "https://example.com/register" +test_cases: + - id: TC-001 + module: "注册页面" + name: "页面正常加载" + type: "UI测试" + priority: "高" + automatable: "是" + steps: + - step: 1 + action: "访问 /register 页面" + expected: "页面加载完成,无报错" + - step: 2 + action: "检查页面元素" + expected: "显示用户名、密码、确认密码、邮箱输入框及注册按钮" + expected_result: "页面正常渲染,所有必要元素可见" + status: "未执行" +``` + +--- + +## 测试用例设计原则 + +### 覆盖维度 + +对每个接口/页面,确保覆盖以下维度: + +1. **正向用例** - 合法输入,验证功能正常 +2. **异常/负向用例** - 非法输入、缺少必填字段、类型错误 +3. **边界值用例** - 最大值、最小值、临界值(如字符串长度限制) +4. **权限用例** - 未登录、越权访问、token 过期 +5. **并发/性能用例**(如有性能要求) + +### 接口测试重点 + +- 请求参数:必填/选填、类型、长度、格式 +- 响应码:2xx/4xx/5xx 各场景 +- 响应体:字段完整性、数据类型、业务逻辑 +- 鉴权:token 有效/无效/过期/权限不足 +- 幂等性:重复请求行为 +- 分页接口:第一页/最后一页/越界页码 + +### 页面测试重点 + +- 页面加载与渲染 +- 表单校验(前端校验) +- 按钮交互与状态变化 +- 跳转逻辑 +- 异常状态(网络错误、空数据) +- 响应式布局(如需) diff --git a/skills/test/test-case-generator/scripts/generate_excel.py b/skills/test/test-case-generator/scripts/generate_excel.py new file mode 100644 index 0000000..68d89cb --- /dev/null +++ b/skills/test/test-case-generator/scripts/generate_excel.py @@ -0,0 +1,502 @@ +#!/usr/bin/env python3 +""" +Generate Excel test case file from YAML test case file. + +Usage: + python generate_excel.py [output.xlsx] [--template /path/to/Template.xlsx] + +Template search order (first match wins): + 1. --template argument (explicit path) + 2. Same directory as the yaml file: Template.xlsx + 3. Parent directories of the yaml file (up to 3 levels): Template.xlsx + 4. Current working directory: Template.xlsx + 5. 
Fallback: built-in style (original behavior)
+"""
+
+import sys
+import os
+import shutil
+import copy
+from datetime import datetime
+
+try:
+    # yaml must be inside the try block too: if pyyaml is missing,
+    # importing it at module top level would crash before the installer runs
+    import yaml
+    import openpyxl
+    from openpyxl.styles import Font, PatternFill, Alignment, Border, Side
+    from openpyxl.utils import get_column_letter
+except ImportError:
+    print("Installing openpyxl and pyyaml...")
+    os.system("%s -m pip install openpyxl pyyaml -q" % sys.executable)
+    import yaml
+    import openpyxl
+    from openpyxl.styles import Font, PatternFill, Alignment, Border, Side
+    from openpyxl.utils import get_column_letter
+
+
+# ---------- Fallback style constants (used when no template found) ----------
+COLOR_HEADER_BG = "2F5496"
+COLOR_HEADER_FG = "FFFFFF"
+COLOR_ALT_ROW = "EEF4FF"
+COLOR_BORDER = "B8C4D8"
+
+PRIORITY_COLORS = {
+    "高": "C00000",
+    "中": "E07000",
+    "低": "375623",
+    "P0": "C00000",
+    "P1": "C00000",
+    "P2": "E07000",
+    "P3": "375623",
+}
+
+FALLBACK_COLUMNS = [
+    ("用例ID", 10),
+    ("模块", 14),
+    ("用例名称", 30),
+    ("用例类型", 14),
+    ("优先级", 10),
+    ("是否可自动化", 12),
+    ("前置条件", 30),
+    ("测试步骤", 45),
+    ("预期结果", 35),
+    ("实际结果", 25),
+    ("执行状态", 12),
+    ("备注", 20),
+]
+
+AUTOMATABLE_COLORS = {
+    "是": "375623",  # 深绿
+    "否": "C00000",  # 深红
+    "部分": "E07000",  # 橙色
+}
+
+# Template column mapping: column_letter -> (yaml_field_or_callable, default_value)
+# Matches Template.xlsx structure:
+# A:*用例名称 B:用例编号 C:处理者 D:用例状态 E:用例类型 F:用例等级
+# G:迭代 H:模块 I:需求编号 J:缺陷编号 K:描述 L:前置条件
+# M:归属目录 N:标签 O:自定义字段1 P:自定义字段2 Q:测试步骤模式
+# R:测试步骤1 S:预期结果1 T:测试步骤2 U:预期结果2 ...
AD:测试步骤7 AE:预期结果7 +TEMPLATE_COLUMN_MAP = { + "A": "name", + "B": "id", + "C": "", # 处理者 - empty + "D": "status", + "E": "type", + "F": "priority", # will be mapped to Priority 1/2/3 + "G": "", # 迭代 - empty + "H": "module", + "I": "", # 需求编号 - empty + "J": "", # 缺陷编号 - empty + "K": "expected_result", + "L": "preconditions", + "M": "module", # 归属目录 = module + "N": "type", # 标签 = type + "O": "automatable", # 自定义字段1 = 是否可自动化 + "P": "", + "Q": "__steps_mode__", # always "步骤" + # R-AE: steps 1-7 (action/expected pairs) +} + +PRIORITY_MAP = { + "高": "Priority 1", + "中": "Priority 2", + "低": "Priority 3", + "P0": "Priority 0", + "P1": "Priority 1", + "P2": "Priority 2", + "P3": "Priority 3", + "P4": "Priority 4", +} + +MAX_STEPS = 7 # Template supports 7 steps (columns R-AE) + + +# ---------- Helpers ---------- + +def thin_border(): + side = Side(style="thin", color=COLOR_BORDER) + return Border(left=side, right=side, top=side, bottom=side) + + +def format_list(value) -> str: + if not value: + return "" + if isinstance(value, list): + return "\n".join("- %s" % item for item in value) + return str(value) + + +def format_steps_fallback(steps) -> str: + """Format steps as single string for fallback (non-template) mode.""" + if not steps: + return "" + if isinstance(steps, str): + return steps + lines = [] + for i, step in enumerate(steps, start=1): + if isinstance(step, dict): + action = step.get("action", step.get("step", "")) + expected = step.get("expected", "") + line = "%d. %s" % (i, action) + if expected: + line += "\n -> 期望:%s" % expected + else: + line = "%d. 
%s" % (i, step) + lines.append(line) + return "\n".join(lines) + + +def parse_steps(steps): + """Parse steps into list of (action, expected) tuples, max MAX_STEPS.""" + if not steps: + return [] + if isinstance(steps, str): + return [(steps, "")] + result = [] + for step in steps[:MAX_STEPS]: + if isinstance(step, dict): + action = step.get("action", step.get("step", "")) + expected = step.get("expected", "") + else: + action = str(step) + expected = "" + result.append((action, expected)) + return result + + +def load_yaml(path: str) -> dict: + with open(path, "r", encoding="utf-8") as f: + return yaml.safe_load(f) + + +def make_header_style(size=11): + return ( + Font(name="Microsoft YaHei", bold=True, color=COLOR_HEADER_FG, size=size), + PatternFill("solid", fgColor=COLOR_HEADER_BG), + Alignment(horizontal="center", vertical="center", wrap_text=True), + ) + + +def find_template(yaml_path: str, explicit_template: str = None) -> str: + """Search for Template.xlsx, return path if found, else None.""" + if explicit_template and os.path.isfile(explicit_template): + return explicit_template + + candidates = [] + # 1. Same dir as yaml + candidates.append(os.path.join(os.path.dirname(os.path.abspath(yaml_path)), "Template.xlsx")) + # 2. Parent directories (up to 3 levels) + d = os.path.dirname(os.path.abspath(yaml_path)) + for _ in range(3): + d = os.path.dirname(d) + candidates.append(os.path.join(d, "Template.xlsx")) + # 3. 
Current working directory + candidates.append(os.path.join(os.getcwd(), "Template.xlsx")) + + for path in candidates: + if os.path.isfile(path): + return path + return None + + +def copy_cell_style(src_cell, dst_cell): + """Copy style attributes from src_cell to dst_cell.""" + if src_cell.font: + dst_cell.font = copy.copy(src_cell.font) + if src_cell.fill: + dst_cell.fill = copy.copy(src_cell.fill) + if src_cell.border: + dst_cell.border = copy.copy(src_cell.border) + if src_cell.alignment: + dst_cell.alignment = copy.copy(src_cell.alignment) + + +# ---------- Template-based generation ---------- + +def generate_with_template(wb_template, test_cases: list, suite_name: str, output_path: str): + """Write test cases into a copy of the template workbook.""" + # Work on the first sheet (sheet1) + ws = wb_template.worksheets[0] + + # Delete all rows except header (row 1) + if ws.max_row > 1: + ws.delete_rows(2, ws.max_row - 1) + + # Capture sample row style from original template row 2 for reuse + # (already deleted - we'll use a default data cell style instead) + data_font = Font(name="宋体", size=12) + data_align = Alignment(horizontal="center", vertical="center", wrap_text=True) + data_align_left = Alignment(horizontal="left", vertical="center", wrap_text=True) + + for row_idx, tc_data in enumerate(test_cases): + row_num = row_idx + 2 # data starts at row 2 + + priority_raw = str(tc_data.get("priority", "中")) + priority_val = PRIORITY_MAP.get(priority_raw, priority_raw) + precond_str = format_list(tc_data.get("preconditions", tc_data.get("precondition", ""))) + steps_parsed = parse_steps(tc_data.get("steps", tc_data.get("test_steps", []))) + + # Build column-to-value mapping + col_values = {} + for col_letter, field in TEMPLATE_COLUMN_MAP.items(): + if not field: + col_values[col_letter] = "" + elif field == "priority": + col_values[col_letter] = priority_val + elif field == "preconditions": + col_values[col_letter] = precond_str + elif field == "expected_result": + 
col_values[col_letter] = tc_data.get("expected_result", tc_data.get("expected", "")) + elif field == "status": + col_values[col_letter] = tc_data.get("status", "未执行") + elif field == "__steps_mode__": + col_values[col_letter] = "步骤" if steps_parsed else "" + else: + col_values[col_letter] = tc_data.get(field, tc_data.get("title" if field == "name" else field, "")) + + # Steps: columns R/S, T/U, V/W, X/Y, Z/AA, AB/AC, AD/AE + step_col_pairs = [ + ("R", "S"), ("T", "U"), ("V", "W"), + ("X", "Y"), ("Z", "AA"), ("AB", "AC"), ("AD", "AE"), + ] + for i, (act_col, exp_col) in enumerate(step_col_pairs): + if i < len(steps_parsed): + col_values[act_col] = steps_parsed[i][0] + col_values[exp_col] = steps_parsed[i][1] + else: + col_values[act_col] = "" + col_values[exp_col] = "" + + # Write cells + for col_idx in range(1, ws.max_column + 1): + col_letter = get_column_letter(col_idx) + value = col_values.get(col_letter, "") + cell = ws.cell(row=row_num, column=col_idx, value=str(value) if value else "") + cell.font = data_font + # Left-align text-heavy columns + if col_letter in ("A", "K", "L", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "AA", "AB", "AC", "AD", "AE"): + cell.alignment = data_align_left + else: + cell.alignment = data_align + + # Adjust row height based on content + step_count = len(steps_parsed) + ws.row_dimensions[row_num].height = max(20, step_count * 18) + + # Add summary sheet + if "统计汇总" in wb_template.sheetnames: + del wb_template["统计汇总"] + ws_summary = wb_template.create_sheet("统计汇总") + _write_summary(ws_summary, test_cases, suite_name) + + wb_template.save(output_path) + print("[OK] Excel generated (template mode): %s" % output_path) + print(" Total cases: %d" % len(test_cases)) + print(" Template columns: %d" % ws.max_column) + return output_path + + +# ---------- Fallback (built-in style) generation ---------- + +def generate_fallback(test_cases: list, suite_name: str, output_path: str): + """Original built-in style generation.""" + wb = 
openpyxl.Workbook() + ws = wb.active + ws.title = "测试用例" + + h_font, h_fill, h_align = make_header_style(13) + + title_text = "%s 生成时间:%s" % (suite_name, datetime.now().strftime("%Y-%m-%d %H:%M")) + ws.merge_cells(start_row=1, start_column=1, end_row=1, end_column=len(FALLBACK_COLUMNS)) + tc = ws.cell(row=1, column=1, value=title_text) + tc.font = h_font + tc.fill = h_fill + tc.alignment = h_align + ws.row_dimensions[1].height = 32 + + col_font, col_fill, col_align = make_header_style(11) + for col_idx, (col_name, col_width) in enumerate(FALLBACK_COLUMNS, start=1): + cell = ws.cell(row=2, column=col_idx, value=col_name) + cell.font = col_font + cell.fill = col_fill + cell.alignment = col_align + cell.border = thin_border() + ws.column_dimensions[get_column_letter(col_idx)].width = col_width + ws.row_dimensions[2].height = 28 + ws.freeze_panes = "A3" + + wrap_top = Alignment(vertical="top", wrap_text=True) + alt_fill = PatternFill("solid", fgColor=COLOR_ALT_ROW) + + for i, tc_data in enumerate(test_cases): + row_num = i + 3 + is_alt = (i % 2 == 1) + + steps_str = format_steps_fallback(tc_data.get("steps", tc_data.get("test_steps", ""))) + precond_str = format_list(tc_data.get("preconditions", tc_data.get("precondition", ""))) + priority = str(tc_data.get("priority", "中")) + automatable = str(tc_data.get("automatable", "")) + + row_values = [ + tc_data.get("id", ""), + tc_data.get("module", tc_data.get("feature", "")), + tc_data.get("name", tc_data.get("title", "")), + tc_data.get("type", tc_data.get("test_type", "功能测试")), + priority, + automatable, + precond_str, + steps_str, + tc_data.get("expected_result", tc_data.get("expected", "")), + tc_data.get("actual_result", ""), + tc_data.get("status", "未执行"), + tc_data.get("remarks", tc_data.get("notes", "")), + ] + + for col_idx, value in enumerate(row_values, start=1): + cell = ws.cell(row=row_num, column=col_idx, value=str(value) if value else "") + cell.font = Font(name="Microsoft YaHei", size=10) + cell.alignment = 
wrap_top + cell.border = thin_border() + if is_alt: + cell.fill = alt_fill + + if priority in PRIORITY_COLORS: + pri_cell = ws.cell(row=row_num, column=5) + pri_cell.font = Font(name="Microsoft YaHei", size=10, bold=True, color=PRIORITY_COLORS[priority]) + + if automatable in AUTOMATABLE_COLORS: + auto_cell = ws.cell(row=row_num, column=6) + auto_cell.font = Font(name="Microsoft YaHei", size=10, bold=True, color=AUTOMATABLE_COLORS[automatable]) + + step_lines = steps_str.count("\n") + 1 if steps_str else 1 + ws.row_dimensions[row_num].height = max(20, step_lines * 15) + + ws2 = wb.create_sheet("统计汇总") + _write_summary(ws2, test_cases, suite_name) + + wb.save(output_path) + print("[OK] Excel generated (built-in style): %s" % output_path) + print(" Total cases: %d" % len(test_cases)) + return output_path + + +# ---------- Summary sheet ---------- + +def _write_summary(ws, test_cases: list, suite_name: str): + ws.column_dimensions["A"].width = 22 + ws.column_dimensions["B"].width = 12 + + h_font, h_fill, _ = make_header_style(11) + center = Alignment(horizontal="center", vertical="center") + border = thin_border() + + def hdr(row, col, val): + c = ws.cell(row=row, column=col, value=val) + c.font = h_font + c.fill = h_fill + c.alignment = center + c.border = border + ws.row_dimensions[row].height = 24 + + def val_cell(row, col, v): + c = ws.cell(row=row, column=col, value=v) + c.font = Font(name="Microsoft YaHei", size=11) + c.alignment = center + c.border = border + + title_cell = ws.cell(row=1, column=1, value="测试套件:%s" % suite_name) + title_cell.font = Font(name="Microsoft YaHei", bold=True, size=13) + ws.row_dimensions[1].height = 28 + + priorities, types, modules = {}, {}, {} + automatable_stats = {} + for tc in test_cases: + p = tc.get("priority", "未知") + priorities[p] = priorities.get(p, 0) + 1 + t = tc.get("type", tc.get("test_type", "未知")) + types[t] = types.get(t, 0) + 1 + m = tc.get("module", tc.get("feature", "未知")) + modules[m] = modules.get(m, 0) + 1 + a 
= tc.get("automatable", "未标注")
+        automatable_stats[a] = automatable_stats.get(a, 0) + 1
+
+    r = 3
+    hdr(r, 1, "优先级"); hdr(r, 2, "数量"); r += 1
+    for p, cnt in sorted(priorities.items()):
+        val_cell(r, 1, p); val_cell(r, 2, cnt); r += 1
+    val_cell(r, 1, "合计"); val_cell(r, 2, len(test_cases)); r += 2
+
+    hdr(r, 1, "用例类型"); hdr(r, 2, "数量"); r += 1
+    for t, cnt in sorted(types.items()):
+        val_cell(r, 1, t); val_cell(r, 2, cnt); r += 1
+    r += 1
+
+    hdr(r, 1, "是否可自动化"); hdr(r, 2, "数量"); r += 1
+    for a, cnt in sorted(automatable_stats.items()):
+        val_cell(r, 1, a); val_cell(r, 2, cnt); r += 1
+    r += 1
+
+    hdr(r, 1, "模块"); hdr(r, 2, "数量"); r += 1
+    for m, cnt in sorted(modules.items()):
+        val_cell(r, 1, m); val_cell(r, 2, cnt); r += 1
+
+
+# ---------- Entry point ----------
+
+def generate_excel(yaml_path: str, output_path: str = None, template_path: str = None):
+    data = load_yaml(yaml_path)
+
+    if isinstance(data, list):
+        test_cases = data
+        suite_name = "测试用例集"
+    else:
+        test_cases = data.get("test_cases", data.get("cases", []))
+        suite_name = (
+            data.get("test_suite", {}).get("name", "")
+            or data.get("name", "测试用例集")
+        )
+
+    if not output_path:
+        base = os.path.splitext(yaml_path)[0]
+        output_path = base + ".xlsx"
+
+    # Search for template
+    found_template = find_template(yaml_path, template_path)
+
+    if found_template:
+        print("[INFO] Using template: %s" % found_template)
+        # Copy template to a temp location to avoid modifying the original
+        tmp_path = output_path + ".tmp.xlsx"
+        shutil.copy2(found_template, tmp_path)
+        try:
+            wb = openpyxl.load_workbook(tmp_path)
+            result = generate_with_template(wb, test_cases, suite_name, output_path)
+        finally:
+            if os.path.exists(tmp_path):
+                os.remove(tmp_path)
+        return result
+    else:
+        print("[INFO] Template.xlsx not found, using built-in style")
+        return generate_fallback(test_cases, suite_name, output_path)
+
+
+if __name__ == "__main__":
+    if len(sys.argv) < 2:
+        print("Usage: python generate_excel.py <input.yaml> [output.xlsx] 
[--template path]")
+        sys.exit(1)
+
+    yaml_path = sys.argv[1]
+    out_path = None
+    tpl_path = None
+
+    i = 2
+    while i < len(sys.argv):
+        if sys.argv[i] == "--template" and i + 1 < len(sys.argv):
+            tpl_path = sys.argv[i + 1]
+            i += 2
+        elif not out_path and not sys.argv[i].startswith("--"):
+            out_path = sys.argv[i]
+            i += 1
+        else:
+            i += 1
+
+    generate_excel(yaml_path, out_path, tpl_path)
diff --git a/skills/test/test-case-runner/SKILL.md b/skills/test/test-case-runner/SKILL.md
new file mode 100644
index 0000000..6541d74
--- /dev/null
+++ b/skills/test/test-case-runner/SKILL.md
@@ -0,0 +1,105 @@
+---
+name: test-case-runner
+description: 读取测试用例文件(YAML格式),自动生成 Python pytest 自动化测试脚本,执行测试并输出 HTML 格式的可视化测试报告。适用场景:(1) 给定测试用例 YAML 文件,生成 pytest 自动化脚本;(2) 执行已有 pytest 测试文件并生成 HTML 报告;(3) 端到端完成:YAML 用例 → pytest 脚本 → 执行 → HTML 报告。触发词:执行测试用例、运行测试、生成测试脚本、自动化测试脚本、生成测试报告、pytest、HTML报告、test report。支持 API 接口测试(requests)和 UI 页面测试(Playwright),自动识别类型。
+---
+
+# 测试用例执行器
+
+## 工作模式
+
+根据用户提供的内容选择对应模式:
+
+| 输入 | 执行路径 |
+|------|----------|
+| YAML 测试用例文件 | 生成 pytest 脚本 → 执行 → HTML 报告(完整流程) |
+| 已有 pytest .py 文件 | 直接执行 → HTML 报告 |
+| 仅需生成脚本 | 只运行 Step 1,不执行测试 |
+
+---
+
+## Step 1: 生成 pytest 脚本
+
+**从 YAML 测试用例文件生成 pytest 脚本:**
+
+```bash
+python <skill目录>/scripts/generate_pytest.py <yaml文件路径> [output_dir]
+```
+
+脚本自动识别测试类型:
+- **API 模式**:`source` 字段以 HTTP 方法开头(`POST /api/v1/...`)→ 使用 `requests`
+- **UI 模式**:`source` 字段为 URL(`https://...`)或用例 type 含 "UI" → 使用 `Playwright`
+
+**输出文件:**
+- `test_<套件名>.py` — 主测试类文件
+- `conftest.py` — 共享 fixture(仅在不存在时生成,避免覆盖)
+
+> 生成的测试文件含 `# TODO` 注释,提示用户补充 payload、选择器等具体值。
+> 若用户需要填充细节,读取 `references/pytest_patterns.md` 获取代码模式。
+
+---
+
+## Step 2: 执行测试并生成 HTML 报告
+
+```bash
+python <skill目录>/scripts/run_and_report.py <测试文件或目录> [report.html]
+```
+
+**执行流程:**
+1. 自动安装 `pytest-json-report`(用于捕获结构化结果)
+2. 运行 pytest,收集 pass/fail/error/skip 结果
+3. 
生成自包含 HTML 报告(无需额外依赖可直接浏览器打开) + +**报告内容:** +- 统计卡片:总数、通过、失败、错误、跳过 +- 彩色进度条(通过率可视化) +- 每条用例的状态、耗时、失败详情(可展开) +- 控制台输出(带颜色高亮) + +--- + +## Step 3: 完整流程示例 + +```bash +# 1. 生成测试脚本 +python test-case-runner/scripts/generate_pytest.py ./login_test_cases.yaml ./tests/ + +# 2. 安装依赖(API 测试) +pip install pytest requests + +# 3. 配置环境(设置被测服务地址) +export TEST_BASE_URL=http://your-server:8080 + +# 4. 执行测试 + 生成 HTML 报告 +python test-case-runner/scripts/run_and_report.py ./tests/ ./test_report.html +``` + +完成后告知用户: +- pytest 脚本文件的完整路径 +- HTML 报告的完整路径 +- 测试结果摘要(总数/通过/失败) + +--- + +## 环境变量 + +| 变量 | 说明 | 默认值 | +|------|------|--------| +| `TEST_BASE_URL` | 被测服务基础地址 | `http://localhost:8080` | +| `TEST_API_TOKEN` | API 鉴权 token(API 测试用) | 空 | + +--- + +## 依赖安装 + +```bash +# API 测试 +pip install pytest requests pytest-json-report + +# UI 测试(Playwright) +pip install pytest playwright pytest-json-report +playwright install chromium +``` + +## 参考资料 + +- pytest 代码模式、参数化、conftest 写法:见 `references/pytest_patterns.md` diff --git a/skills/test/test-case-runner/references/pytest_patterns.md b/skills/test/test-case-runner/references/pytest_patterns.md new file mode 100644 index 0000000..315832d --- /dev/null +++ b/skills/test/test-case-runner/references/pytest_patterns.md @@ -0,0 +1,203 @@ +# pytest 测试模式参考 + +## 目录 +1. [API 测试模式](#api) +2. [UI 测试模式 (Playwright)](#ui) +3. [conftest.py 模式](#conftest) +4. [常用断言](#assertions) +5. [运行命令速查](#commands) + +--- + +## 1. 
API 测试模式 {#api} + +### 基本结构 +```python +import pytest +import requests + +class TestLoginAPI: + def test_login_success(self, session, base_url): + url = f"{base_url}/api/v1/auth/login" + payload = {"email": "test@example.com", "password": "Test@123"} + response = session.post(url, json=payload) + assert response.status_code == 200 + data = response.json() + assert "token" in data + assert data["code"] == 0 +``` + +### 参数化测试(覆盖多场景) +```python +@pytest.mark.parametrize("email,password,expected_code", [ + ("valid@test.com", "Valid@123", 200), + ("valid@test.com", "wrongpwd", 401), + ("notanemail", "anypass", 400), +]) +def test_login_parametrize(self, session, base_url, email, password, expected_code): + resp = session.post(f"{base_url}/api/v1/auth/login", + json={"email": email, "password": password}) + assert resp.status_code == expected_code +``` + +### 鉴权测试 +```python +def test_unauthorized_access(self, base_url): + # No auth header + resp = requests.get(f"{base_url}/api/v1/profile") + assert resp.status_code == 401 + +def test_expired_token(self, base_url): + headers = {"Authorization": "Bearer expired_token_here"} + resp = requests.get(f"{base_url}/api/v1/profile", headers=headers) + assert resp.status_code in (401, 403) +``` + +### 边界值测试(字段长度) +```python +def test_username_max_length(self, session, base_url): + payload = {"username": "a" * 256, "password": "Valid@123"} + resp = session.post(f"{base_url}/api/v1/register", json=payload) + assert resp.status_code == 400 + assert "username" in resp.json().get("message", "").lower() +``` + +--- + +## 2. 
UI 测试模式 (Playwright) {#ui} + +### 基本结构 +```python +import pytest +from playwright.sync_api import expect + +class TestLoginPage: + def test_page_loads(self, page, base_url): + page.goto(f"{base_url}/login") + page.wait_for_load_state("networkidle") + expect(page.locator("input[name='email']")).to_be_visible() + expect(page.locator("button[type='submit']")).to_be_visible() + + def test_login_success(self, page, base_url): + page.goto(f"{base_url}/login") + page.fill("input[name='email']", "test@example.com") + page.fill("input[name='password']", "Test@123") + page.click("button[type='submit']") + page.wait_for_url("**/dashboard") + expect(page.locator(".user-avatar")).to_be_visible() +``` + +### 截图与调试 +```python +def test_with_screenshot(self, page, base_url): + page.goto(f"{base_url}/login") + page.screenshot(path="screenshots/login_page.png") + # ... test steps + page.screenshot(path="screenshots/after_login.png") +``` + +### 网络拦截(模拟接口) +```python +def test_with_mock_api(self, page, base_url): + page.route("**/api/v1/auth/login", lambda route: route.fulfill( + status=200, + body='{"token": "mock_token", "code": 0}', + headers={"Content-Type": "application/json"} + )) + page.goto(f"{base_url}/login") + page.fill("input[name='email']", "any@test.com") + page.click("button[type='submit']") + expect(page.locator(".dashboard")).to_be_visible() +``` + +--- + +## 3. 
conftest.py 模式 {#conftest} + +### API conftest +```python +import pytest, requests, os + +@pytest.fixture(scope="session") +def base_url(): + return os.getenv("TEST_BASE_URL", "http://localhost:8080") + +@pytest.fixture(scope="session") +def auth_token(base_url): + resp = requests.post(f"{base_url}/api/v1/auth/login", + json={"email": "admin@test.com", "password": "Admin@123"}) + return resp.json()["token"] + +@pytest.fixture(scope="session") +def session(auth_token): + s = requests.Session() + s.headers.update({"Authorization": f"Bearer {auth_token}"}) + return s +``` + +### Playwright conftest +```python +import pytest +from playwright.sync_api import sync_playwright + +@pytest.fixture(scope="session") +def browser(): + with sync_playwright() as p: + yield p.chromium.launch(headless=True) + +@pytest.fixture +def page(browser): + context = browser.new_context(viewport={"width": 1280, "height": 720}) + page = context.new_page() + yield page + context.close() +``` + +--- + +## 4. 常用断言 {#assertions} + +```python +# HTTP 状态码 +assert response.status_code == 200 +assert response.status_code in (200, 201) + +# 响应体字段 +data = response.json() +assert data["code"] == 0 +assert "token" in data +assert data["user"]["email"] == "test@example.com" +assert len(data["items"]) > 0 + +# Playwright 元素 +expect(page.locator("h1")).to_have_text("欢迎登录") +expect(page.locator(".error-msg")).to_be_visible() +expect(page.locator("input[name='email']")).to_be_enabled() +expect(page.locator(".loading")).not_to_be_visible() +``` + +--- + +## 5. 
运行命令速查 {#commands}
+
+```bash
+# 安装依赖(API 测试)
+pip install pytest requests pytest-json-report
+
+# 安装依赖(UI 测试)
+pip install pytest playwright pytest-json-report
+playwright install chromium
+
+# 运行全部测试
+pytest tests/ -v
+
+# 只运行某个优先级(需要给测试加 mark)
+pytest tests/ -m "high_priority" -v
+
+# 运行后生成 HTML 报告
+python run_and_report.py tests/test_login.py report.html
+
+# 设置环境变量
+export TEST_BASE_URL=http://your-server:8080
+export TEST_API_TOKEN=your_token_here
+```
diff --git a/skills/test/test-case-runner/scripts/generate_pytest.py b/skills/test/test-case-runner/scripts/generate_pytest.py
new file mode 100644
index 0000000..08c2b92
--- /dev/null
+++ b/skills/test/test-case-runner/scripts/generate_pytest.py
@@ -0,0 +1,403 @@
+#!/usr/bin/env python3
+"""
+Generate pytest automation test scripts from YAML test case files.
+
+Usage:
+    python generate_pytest.py <test_cases.yaml> [output_dir]
+
+Supports:
+    - API tests (type contains: 功能测试/异常测试/边界测试/安全测试/性能测试)
+    - UI tests (type contains: UI测试/页面测试)
+
+Auto-detects test mode from 'source' field:
+    - Contains HTTP method (GET/POST/PUT/DELETE/PATCH) → API mode
+    - Contains http:// or https:// URL → UI mode (Playwright)
+    - Falls back to checking test case types
+
+Output files:
+    - test_<suite_slug>.py   main pytest file
+    - conftest.py            shared fixtures
+"""
+
+import sys, os, re, yaml
+from datetime import datetime
+from pathlib import Path
+
+
+# ─────────────────────────────────────────────
+# Helpers
+# ─────────────────────────────────────────────
+
+def load_yaml(path):
+    with open(path, "r", encoding="utf-8") as f:
+        return yaml.safe_load(f)
+
+def slugify(text):
+    """Convert Chinese/mixed text to a valid Python identifier."""
+    text = str(text)
+    text = re.sub(r"[^\w\u4e00-\u9fff]", "_", text)
+    text = re.sub(r"_+", "_", text).strip("_")
+    return text or "case"
+
+def indent(text, n=4):
+    pad = " " * n
+    return "\n".join(pad + line for line in text.splitlines())
+
+def format_steps_comment(steps):
+    if not steps:
+        return ""
+    lines = []
+    for i, s in 
enumerate(steps, 1): + if isinstance(s, dict): + action = s.get("action", s.get("step", "")) + exp = s.get("expected", "") + lines.append(f"# Step {i}: {action}") + if exp: + lines.append(f"# -> expect: {exp}") + else: + lines.append(f"# Step {i}: {s}") + return "\n".join(lines) + +def format_precond_comment(preconditions): + if not preconditions: + return "" + lines = ["# Preconditions:"] + if isinstance(preconditions, list): + for p in preconditions: + lines.append(f"# - {p}") + else: + lines.append(f"# {preconditions}") + return "\n".join(lines) + + +# ───────────────────────────────────────────── +# Detect mode +# ───────────────────────────────────────────── + +HTTP_METHODS = {"GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"} + +def detect_mode(data): + """Return 'api' or 'ui'.""" + source = str(data.get("source", "")).upper() + # Check for HTTP method prefix (e.g. "POST /api/v1/login") + first_word = source.split()[0] if source.split() else "" + if first_word in HTTP_METHODS: + return "api" + # Check for URL + src_lower = str(data.get("source", "")).lower() + if src_lower.startswith("http://") or src_lower.startswith("https://"): + return "ui" + # Check case types + all_types = " ".join( + str(tc.get("type", "")) for tc in data.get("test_cases", []) + ) + if "UI" in all_types or "页面" in all_types: + return "ui" + return "api" + +def parse_api_info(source): + """Parse 'POST /api/v1/users' → (method, path).""" + parts = source.strip().split(None, 1) + if len(parts) == 2 and parts[0].upper() in HTTP_METHODS: + return parts[0].upper(), parts[1] + return "GET", source + + +# ───────────────────────────────────────────── +# API test generation +# ───────────────────────────────────────────── + +API_CONFTEST = '''\ +import pytest +import requests +import os + +BASE_URL = os.getenv("TEST_BASE_URL", "http://localhost:8080") +API_TOKEN = os.getenv("TEST_API_TOKEN", "") + +@pytest.fixture(scope="session") +def base_url(): + return BASE_URL.rstrip("/") + 
+@pytest.fixture(scope="session")
+def auth_headers():
+    if API_TOKEN:
+        return {"Authorization": f"Bearer {API_TOKEN}"}
+    return {}
+
+@pytest.fixture(scope="session")
+def session(auth_headers):
+    s = requests.Session()
+    s.headers.update(auth_headers)
+    return s
+'''
+
+def build_api_test_method(tc, method, path):
+    tc_id = tc.get("id", "")
+    name = tc.get("name", "")
+    priority = tc.get("priority", "中")
+    steps = tc.get("steps", [])
+    precond = tc.get("preconditions", [])
+    expected = tc.get("expected_result", "")
+
+    func_name = f"test_{slugify(tc_id)}_{slugify(name)}"[:80]
+
+    # Try to extract expected status code from steps or expected_result
+    status_codes = re.findall(r"\b([1-5]\d{2})\b", str(steps) + str(expected))
+    expected_status = status_codes[0] if status_codes else "200"
+
+    lines = []
+    lines.append(f"def {func_name}(self, session, base_url):")
+    lines.append(f'    """')
+    lines.append(f'    [{tc_id}] {name}')
+    lines.append(f'    Priority: {priority}')
+    if expected:
+        lines.append(f'    Expected: {expected}')
+    lines.append(f'    """')
+
+    precond_comment = format_precond_comment(precond)
+    if precond_comment:
+        for line in precond_comment.splitlines():
+            lines.append(f"    {line}")
+
+    steps_comment = format_steps_comment(steps)
+    if steps_comment:
+        lines.append("    # --- Steps ---")
+        for line in steps_comment.splitlines():
+            lines.append(f"    {line}")
+
+    lines.append(f"    url = f\"{{base_url}}{path}\"")
+
+    # Build request body template from step action hints
+    step_texts = " ".join(
+        (s.get("action", "") if isinstance(s, dict) else str(s)) for s in steps
+    )
+    # Only treat as JSON if it contains key:value pairs (has a colon inside braces)
+    json_match = re.search(r"\{[^{}]*:[^{}]*\}", step_texts)
+    if json_match:
+        lines.append(f"    payload = {json_match.group()}  # TODO: adjust payload")
+    else:
+        lines.append(f"    payload = {{}}  # TODO: fill request body")
+
+    lines.append(f"    response = session.{method.lower()}(url, json=payload)")
+    lines.append(f"    assert response.status_code == {expected_status}, \\")
+    lines.append(f"        f\"Expected {expected_status}, got {{response.status_code}}: {{response.text}}\"")
+    lines.append(f"    # TODO: add more assertions based on response body")
+    lines.append(f"    return response")
+
+    return "\n".join(lines)
+
+
+def generate_api_tests(data, out_dir):
+    suite_name = data.get("name", "TestSuite")
+    source = data.get("source", "")
+    cases = data.get("test_cases", [])
+    method, path = parse_api_info(source)
+
+    # Fall back to "TestSuite" when the name has no ASCII characters
+    class_name = "Test" + (re.sub(r"[^a-zA-Z0-9]", "", suite_name) or "Suite")
+    slug = slugify(suite_name).lower()
+    out_file = Path(out_dir) / f"test_{slug}.py"
+
+    lines = []
+    lines.append(f'"""')
+    lines.append(f'Auto-generated API tests for: {suite_name}')
+    lines.append(f'Source   : {source}')
+    lines.append(f'Generated: {datetime.now().strftime("%Y-%m-%d %H:%M")}')
+    lines.append(f'"""')
+    lines.append("")
+    lines.append("import pytest")
+    lines.append("")
+    lines.append("")
+    lines.append(f"class {class_name}:")
+    lines.append(f'    """Tests for {source}"""')
+    lines.append("")
+
+    for tc in cases:
+        method_tc = tc.get("method", method)  # case-level method override
+        path_tc = tc.get("path", path)
+        body = build_api_test_method(tc, method_tc, path_tc)
+        for line in body.splitlines():
+            lines.append("    " + line)
+        lines.append("")
+
+    content = "\n".join(lines)
+    out_file.write_text(content, encoding="utf-8")
+    return str(out_file)
+
+
+# ─────────────────────────────────────────────
+# UI test generation (Playwright)
+# ─────────────────────────────────────────────
+
+UI_CONFTEST = '''\
+import pytest
+from playwright.sync_api import sync_playwright, Page
+import os
+
+BASE_URL = os.getenv("TEST_BASE_URL", "http://localhost:3000")
+
+@pytest.fixture(scope="session")
+def browser():
+    with sync_playwright() as p:
+        browser = p.chromium.launch(headless=True)
+        yield browser
+        browser.close()
+
+@pytest.fixture
+def page(browser) -> Page:
+    context = browser.new_context()
+    page = context.new_page()
+    yield page
+    context.close()
+
+@pytest.fixture
+def base_url():
+    return BASE_URL.rstrip("/")
+'''
+
+def build_ui_test_method(tc, source_url):
+    tc_id = tc.get("id", "")
+    name = tc.get("name", "")
+    priority = tc.get("priority", "中")
+    steps = tc.get("steps", [])
+    precond = tc.get("preconditions", [])
+    expected = tc.get("expected_result", "")
+
+    func_name = f"test_{slugify(tc_id)}_{slugify(name)}"[:80]
+
+    lines = []
+    lines.append(f"def {func_name}(self, page, base_url):")
+    lines.append(f'    """')
+    lines.append(f'    [{tc_id}] {name}')
+    lines.append(f'    Priority: {priority}')
+    if expected:
+        lines.append(f'    Expected: {expected}')
+    lines.append(f'    """')
+
+    precond_comment = format_precond_comment(precond)
+    if precond_comment:
+        for line in precond_comment.splitlines():
+            lines.append(f"    {line}")
+
+    # Determine page path
+    try:
+        from urllib.parse import urlparse
+        parsed = urlparse(source_url)
+        page_path = parsed.path or "/"
+    except Exception:
+        page_path = "/"
+
+    lines.append(f"    page.goto(f\"{{base_url}}{page_path}\")")
+    lines.append(f"    page.wait_for_load_state('networkidle')")
+    lines.append("")
+
+    # Convert steps to Playwright actions
+    for i, step in enumerate(steps, 1):
+        if isinstance(step, dict):
+            action = step.get("action", "")
+            exp = step.get("expected", "")
+        else:
+            action, exp = str(step), ""
+
+        lines.append(f"    # Step {i}: {action}")
+
+        action_lower = action.lower()
+        if any(k in action_lower for k in ["点击", "click", "按钮", "button"]):
+            lines.append(f"    # page.click('selector')  # TODO: update selector")
+        elif any(k in action_lower for k in ["输入", "填写", "fill", "input"]):
+            lines.append(f"    # page.fill('selector', 'value')  # TODO: update selector & value")
+        elif any(k in action_lower for k in ["选择", "select"]):
+            lines.append(f"    # page.select_option('selector', 'value')  # TODO")
+        elif any(k in action_lower for k in ["检查", "验证", "断言", "assert", "expect"]):
+            lines.append(f"    # expect(page.locator('selector')).to_be_visible()  # TODO")
+        else:
+            lines.append(f"    pass  # TODO: implement step")
+
+        if exp:
+            lines.append(f"    # Expected: {exp}")
+        lines.append("")
+
+    lines.append(f"    # Final assertion: {expected}")
+    lines.append(f"    # TODO: add assertions")
+
+    return "\n".join(lines)
+
+
+def generate_ui_tests(data, out_dir):
+    suite_name = data.get("name", "TestSuite")
+    source = data.get("source", "")
+    cases = data.get("test_cases", [])
+
+    # Fall back to "TestSuite" when the name has no ASCII characters
+    class_name = "Test" + (re.sub(r"[^a-zA-Z0-9]", "", suite_name) or "Suite")
+    slug = slugify(suite_name).lower()
+    out_file = Path(out_dir) / f"test_{slug}.py"
+
+    lines = []
+    lines.append(f'"""')
+    lines.append(f'Auto-generated Playwright UI tests for: {suite_name}')
+    lines.append(f'Source   : {source}')
+    lines.append(f'Generated: {datetime.now().strftime("%Y-%m-%d %H:%M")}')
+    lines.append(f'"""')
+    lines.append("")
+    lines.append("import pytest")
+    lines.append("from playwright.sync_api import expect")
+    lines.append("")
+    lines.append("")
+    lines.append(f"class {class_name}:")
+    lines.append(f'    """UI tests for {source}"""')
+    lines.append("")
+
+    for tc in cases:
+        body = build_ui_test_method(tc, source)
+        for line in body.splitlines():
+            lines.append("    " + line)
+        lines.append("")
+
+    content = "\n".join(lines)
+    out_file.write_text(content, encoding="utf-8")
+    return str(out_file)
+
+
+# ─────────────────────────────────────────────
+# Main
+# ─────────────────────────────────────────────
+
+def main():
+    if len(sys.argv) < 2:
+        print("Usage: python generate_pytest.py <test_cases.yaml> [output_dir]")
+        sys.exit(1)
+
+    yaml_path = sys.argv[1]
+    out_dir = sys.argv[2] if len(sys.argv) > 2 else str(Path(yaml_path).parent)
+    Path(out_dir).mkdir(parents=True, exist_ok=True)
+
+    data = load_yaml(yaml_path)
+    mode = detect_mode(data)
+
+    print(f"[INFO] Detected mode: {mode.upper()}")
+
+    # Write conftest.py
+    conftest_path = Path(out_dir) / "conftest.py"
+    if not conftest_path.exists():
+        conftest_content = API_CONFTEST if mode == "api" else UI_CONFTEST
+        conftest_path.write_text(conftest_content, encoding="utf-8")
+        print(f"[OK] conftest.py -> {conftest_path}")
+
+    # Generate test file
+    if mode == "api":
+        test_file = generate_api_tests(data, out_dir)
+    else:
+        test_file = generate_ui_tests(data, out_dir)
+
+    print(f"[OK] Test file -> {test_file}")
+    print(f"\nInstall deps and run:")
+    if mode == "api":
+        print(f"  pip install pytest requests")
+    else:
+        print(f"  pip install pytest playwright && playwright install chromium")
+    print(f"  pytest {test_file} -v --tb=short")
+    print(f"  # Or use run_and_report.py for HTML report")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/skills/test/test-case-runner/scripts/run_and_report.py b/skills/test/test-case-runner/scripts/run_and_report.py
new file mode 100644
index 0000000..b327d61
--- /dev/null
+++ b/skills/test/test-case-runner/scripts/run_and_report.py
@@ -0,0 +1,454 @@
+#!/usr/bin/env python3
+"""
+Run pytest tests and generate a rich HTML test report.
+
+Usage:
+    python run_and_report.py <tests_path> [report.html]
+
+Features:
+    - Runs pytest programmatically and captures results
+    - Generates a styled, self-contained HTML report
+    - Shows pass/fail/error/skip stats with charts
+    - Includes test output, duration, and failure details
+"""
+
+import sys
+import os
+import json
+import subprocess
+import tempfile
+import re
+from datetime import datetime
+from pathlib import Path
+
+
+# ─────────────────────────────────────────────
+# Run pytest with JSON output
+# ─────────────────────────────────────────────
+
+def run_pytest(test_target: str) -> dict:
+    """Run pytest and return parsed JSON results."""
+    with tempfile.NamedTemporaryFile(suffix=".json", delete=False, mode="w") as tf:
+        json_path = tf.name
+
+    cmd = [
+        sys.executable, "-m", "pytest",
+        test_target,
+        "--tb=short",
+        "-v",
+        "--json-report",
+        f"--json-report-file={json_path}",
+        "--no-header",
+    ]
+
+    # Try with pytest-json-report; fall back to custom collection
+    result = subprocess.run(cmd, capture_output=True, text=True, encoding="utf-8")
+
+    # Check if JSON report was created
+    if Path(json_path).exists():
+        try:
+            with open(json_path, "r", encoding="utf-8") as f:
+                data = json.load(f)
+            os.unlink(json_path)
+            data["_stdout"] = result.stdout
+            data["_stderr"] = result.stderr
+            data["_returncode"] = result.returncode
+            return data
+        except Exception:
+            pass
+
+    # Fallback: parse stdout
+    if Path(json_path).exists():
+        os.unlink(json_path)
+    return _parse_pytest_stdout(result.stdout, result.stderr, result.returncode)
+
+
+def _parse_pytest_stdout(stdout: str, stderr: str, returncode: int) -> dict:
+    """Parse pytest -v output into structured data when JSON plugin unavailable."""
+    tests = []
+    lines = stdout.splitlines()
+
+    for line in lines:
+        # Match lines like: test_foo.py::TestClass::test_bar PASSED [ 10%]
+        m = re.match(
+            r"^([\w/\\.\-]+::[\w:]+)\s+(PASSED|FAILED|ERROR|SKIPPED|XFAIL|XPASS)"
+            r"(?:\s+\[[\s\d]+%\])?",
+            line.strip()
+        )
+        if m:
+            node_id, outcome = m.group(1), m.group(2)
+            tests.append({
+                "nodeid": node_id,
+                "outcome": outcome.lower(),
+                "duration": 0.0,
+                "longrepr": "",
+            })
+
+    # Extract failure details
+    fail_section = False
+    current_fail = None
+    fail_lines = []
+    for line in lines:
+        if line.startswith("FAILED") or line.startswith("ERROR"):
+            fail_section = True
+        if re.match(r"_{5,}", line):
+            if current_fail and fail_lines:
+                for t in tests:
+                    if t["nodeid"] == current_fail:
+                        t["longrepr"] = "\n".join(fail_lines)
+                fail_lines = []
+                current_fail = None
+            if fail_section:
+                m2 = re.match(r"_{5,}\s+([\w/\\.\-]+::[\w:]+)\s+_{5,}", line)
+                if m2:
+                    current_fail = m2.group(1)
+        elif current_fail is not None:
+            fail_lines.append(line)
+
+    # Summary stats
+    passed = sum(1 for t in tests if t["outcome"] == "passed")
+    failed = sum(1 for t in tests if t["outcome"] in ("failed", "error"))
+    skipped = sum(1 for t in tests if t["outcome"] == "skipped")
+
+    return {
+        "tests": tests,
+        "summary": {
+            "passed": passed,
+            "failed": failed,
+            "error": 0,
+            "skipped": skipped,
+            "total": len(tests),
+        },
+        "_stdout": stdout,
+        "_stderr": stderr,
+        "_returncode": returncode,
+        "_fallback": True,
+    }
+
+
+# ─────────────────────────────────────────────
+# HTML Report Generator
+# ─────────────────────────────────────────────
+
+HTML_TEMPLATE = """\
+<!DOCTYPE html>
+<html lang="zh-CN">
+<head>
+<meta charset="utf-8">
+<title>测试报告 - {title}</title>
+<style>
+body {{ font-family: "Helvetica Neue", Arial, "Microsoft YaHei", sans-serif; margin: 0; background: #f5f6fa; color: #333; }}
+.header {{ background: #2c3e50; color: #fff; padding: 18px 24px; font-size: 20px; font-weight: bold; }}
+.meta {{ padding: 10px 24px; color: #666; font-size: 13px; }}
+.cards {{ display: flex; gap: 12px; padding: 0 24px; }}
+.card {{ flex: 1; background: #fff; border-radius: 6px; padding: 14px; text-align: center; box-shadow: 0 1px 3px rgba(0,0,0,.08); }}
+.card .num {{ font-size: 26px; font-weight: bold; }}
+.card .label {{ font-size: 12px; color: #888; }}
+.num.pass {{ color: #27ae60; }} .num.fail {{ color: #e74c3c; }}
+.num.error {{ color: #e67e22; }} .num.skip {{ color: #95a5a6; }}
+.bar {{ display: flex; height: 10px; margin: 14px 24px 4px; border-radius: 5px; overflow: hidden; background: #eee; }}
+.bar .pass {{ background: #27ae60; }} .bar .fail {{ background: #e74c3c; }}
+.bar .error {{ background: #e67e22; }} .bar .skip {{ background: #95a5a6; }}
+.legend {{ padding: 0 24px 10px; font-size: 12px; color: #666; }}
+table {{ width: calc(100% - 48px); margin: 10px 24px; border-collapse: collapse; background: #fff; font-size: 13px; }}
+th, td {{ border: 1px solid #e0e0e0; padding: 6px 10px; text-align: left; }}
+th {{ background: #34495e; color: #fff; }}
+.tag {{ padding: 2px 8px; border-radius: 3px; color: #fff; font-size: 12px; }}
+.tag.pass {{ background: #27ae60; }} .tag.fail {{ background: #e74c3c; }}
+.tag.error {{ background: #e67e22; }} .tag.skip {{ background: #95a5a6; }}
+pre {{ background: #1e1e1e; color: #dcdcdc; padding: 10px; border-radius: 4px; overflow-x: auto; font-size: 12px; }}
+.console {{ margin: 10px 24px 24px; }}
+.console-title {{ font-weight: bold; margin-bottom: 6px; }}
+.c-pass {{ color: #2ecc71; }} .c-fail {{ color: #e74c3c; }} .c-warn {{ color: #f1c40f; }}
+</style>
+</head>
+<body>
+<div class="header">自动化测试报告</div>
+<div class="meta">测试目标:{title} &nbsp;|&nbsp; 执行时间:{run_time} &nbsp;|&nbsp; 耗时:{duration}s</div>
+
+<div class="cards">
+  <div class="card"><div class="num">{total}</div><div class="label">总用例</div></div>
+  <div class="card"><div class="num pass">{passed}</div><div class="label">通过</div></div>
+  <div class="card"><div class="num fail">{failed}</div><div class="label">失败</div></div>
+  <div class="card"><div class="num error">{error}</div><div class="label">错误</div></div>
+  <div class="card"><div class="num skip">{skipped}</div><div class="label">跳过</div></div>
+</div>
+
+<div class="bar">
+  <div class="pass" style="width:{pct_pass:.1f}%"></div>
+  <div class="fail" style="width:{pct_fail:.1f}%"></div>
+  <div class="error" style="width:{pct_error:.1f}%"></div>
+  <div class="skip" style="width:{pct_skip:.1f}%"></div>
+</div>
+<div class="legend">通过 {pct_pass:.1f}% &nbsp; 失败 {pct_fail:.1f}% &nbsp; 错误 {pct_error:.1f}% &nbsp; 跳过 {pct_skip:.1f}%</div>
+
+<table>
+<thead>
+<tr><th>#</th><th>测试节点</th><th>状态</th><th>耗时(s)</th><th>详情</th></tr>
+</thead>
+<tbody>
+{rows}
+</tbody>
+</table>
+
+{console_section}
+
+</body>
+</html>
+"""
+
+ROW_TEMPLATE = """\
+<tr>
+  <td>{idx}</td>
+  <td>{nodeid}</td>
+  <td><span class="tag {css}">{label}</span></td>
+  <td>{duration}</td>
+  <td>{detail}</td>
+</tr>
+"""
+
+OUTCOME_MAP = {
+    "passed": ("pass", "通过"),
+    "failed": ("fail", "失败"),
+    "error": ("error", "错误"),
+    "skipped": ("skip", "跳过"),
+    "xfail": ("skip", "预期失败"),
+    "xpass": ("pass", "意外通过"),
+}
+
+
+def _colorize_console(text):
+    """Add CSS spans to console output lines."""
+    result = []
+    for line in text.splitlines():
+        if " PASSED" in line or "passed" in line.lower():
+            result.append(f'<span class="c-pass">{_esc(line)}</span>')
+        elif " FAILED" in line or " ERROR" in line:
+            result.append(f'<span class="c-fail">{_esc(line)}</span>')
+        elif "Warning" in line or "warning" in line:
+            result.append(f'<span class="c-warn">{_esc(line)}</span>')
+        else:
+            result.append(_esc(line))
+    return "\n".join(result)
+
+
+def _esc(s):
+    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
+
+
+def build_html_report(pytest_data: dict, target: str, duration: float) -> str:
+    summary = pytest_data.get("summary", {})
+    tests = pytest_data.get("tests", [])
+
+    total = summary.get("total", len(tests))
+    passed = summary.get("passed", 0)
+    failed = summary.get("failed", 0)
+    error = summary.get("error", 0)
+    skipped = summary.get("skipped", 0)
+
+    # Recalculate from tests list if needed
+    if total == 0 and tests:
+        total = len(tests)
+        passed = sum(1 for t in tests if t.get("outcome") == "passed")
+        failed = sum(1 for t in tests if t.get("outcome") in ("failed",))
+        error = sum(1 for t in tests if t.get("outcome") == "error")
+        skipped = sum(1 for t in tests if t.get("outcome") == "skipped")
+
+    safe_total = total if total > 0 else 1
+    pct_pass = passed / safe_total * 100
+    pct_fail = failed / safe_total * 100
+    pct_error = error / safe_total * 100
+    pct_skip = skipped / safe_total * 100
+
+    rows = []
+    for i, t in enumerate(tests, 1):
+        outcome = t.get("outcome", "unknown")
+        css, label = OUTCOME_MAP.get(outcome, ("skip", outcome))
+        dur = t.get("duration", 0)
+        dur_str = f"{dur:.3f}" if dur else "-"
+        longrepr = t.get("longrepr", "") or t.get("call", {}).get("longrepr", "")
+
+        if longrepr:
+            detail = (
+                f'<details><summary>查看</summary>'
+                f'<pre>{_esc(str(longrepr))}</pre></details>'
+            )
+        else:
+            detail = ""
+
+        rows.append(ROW_TEMPLATE.format(
+            idx=i,
+            nodeid=_esc(t.get("nodeid", "")),
+            css=css, label=label,
+            duration=dur_str,
+            detail=detail,
+        ))
+
+    stdout = pytest_data.get("_stdout", "")
+    if stdout.strip():
+        console_section = (
+            '<div class="console">'
+            '<div class="console-title">控制台输出</div>'
+            f'<pre>{_colorize_console(stdout)}</pre>'
+            '</div>'
+        )
+    else:
+        console_section = ""
+
+    return HTML_TEMPLATE.format(
+        title=_esc(target),
+        run_time=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
+        duration=f"{duration:.1f}",
+        total=total, passed=passed, failed=failed,
+        error=error, skipped=skipped,
+        pct_pass=pct_pass, pct_fail=pct_fail,
+        pct_error=pct_error, pct_skip=pct_skip,
+        rows="\n".join(rows),
+        console_section=console_section,
+    )
+
+
+# ─────────────────────────────────────────────
+# Main
+# ─────────────────────────────────────────────
+
+def main():
+    if len(sys.argv) < 2:
+        print("Usage: python run_and_report.py <tests_path> [report.html]")
+        sys.exit(1)
+
+    target = sys.argv[1]
+    out_html = sys.argv[2] if len(sys.argv) > 2 else None
+
+    if not Path(target).exists():
+        print(f"[ERROR] Path not found: {target}")
+        sys.exit(1)
+
+    if out_html is None:
+        base = Path(target)
+        if base.is_dir():
+            out_html = str(base / "test_report.html")
+        else:
+            out_html = str(base.parent / "test_report.html")
+
+    # Best-effort install of pytest-json-report (failures are ignored)
+    subprocess.run(
+        [sys.executable, "-m", "pip", "install", "pytest-json-report", "-q"],
+        capture_output=True
+    )
+
+    print(f"[INFO] Running tests: {target}")
+    start = datetime.now()
+    results = run_pytest(target)
+    duration = (datetime.now() - start).total_seconds()
+
+    summary = results.get("summary", {})
+    total = summary.get("total", len(results.get("tests", [])))
+    passed = summary.get("passed", 0)
+    failed = summary.get("failed", 0) + summary.get("error", 0)
+
+    print(f"[INFO] Results: {total} total, {passed} passed, {failed} failed "
+          f"({duration:.1f}s)")
+
+    html = build_html_report(results, target, duration)
+    Path(out_html).write_text(html, encoding="utf-8")
+    print(f"[OK] HTML report -> {out_html}")
+
+    return 0 if results.get("_returncode", 1) == 0 else 1
+
+
+if __name__ == "__main__":
+    sys.exit(main())
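For reviewers: the `source`-based mode detection in `generate_pytest.py` can be exercised standalone. The sketch below is a minimal re-implementation of the same heuristic (not the script itself), using the field names from the YAML schema in SKILL.md:

```python
# Minimal re-implementation of the detect_mode heuristic from generate_pytest.py:
# an HTTP-method prefix in `source` means API tests, a bare URL means UI tests,
# otherwise the test case types decide.
HTTP_METHODS = {"GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"}

def detect_mode(data):
    source = str(data.get("source", ""))
    first = source.split()[0].upper() if source.split() else ""
    if first in HTTP_METHODS:
        return "api"
    if source.lower().startswith(("http://", "https://")):
        return "ui"
    types = " ".join(str(tc.get("type", "")) for tc in data.get("test_cases", []))
    return "ui" if ("UI" in types or "页面" in types) else "api"

print(detect_mode({"source": "POST /api/v1/login"}))         # api
print(detect_mode({"source": "https://example.com/login"}))  # ui
```

A plain PRD-style `source` with UI-typed cases (e.g. `type: "UI测试"`) also resolves to UI mode via the final fallback.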