Commit 00d02cb3 authored by Johannes Bechberger

Add lexer tests and fix diff test result

parent c1b6f48b
@@ -12,14 +12,15 @@ such code as it probably helps the other teams (and could later be integrated in
Test modes
----------

The test cases are divided into 5 'modes':

- __lexer__: Test cases that check the lexed tokens (and their correct output)
- __syntax__: Test cases that just check whether `./run --parsecheck` accepts them as correct or rejects them.
- __ast__: Test cases that check the generated AST.
- __semantic__: Test cases that check the semantic checking of MiniJava programs
- __exec__: Test cases that check the correct compilation of MiniJava programs.

_Only the lexer, syntax and ast modes are currently usable, but the others will follow._
The test cases for each mode are located in a folder with the same name, as sketched below.
The default directory that contains all test folders is `tests`.
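For illustration, here is a minimal sketch of how test cases could be discovered under this layout; the helper name is hypothetical, and only the folder convention described above is taken as given:

```python
import os

MODES = ["lexer", "syntax", "ast", "semantic", "exec"]

def discover_tests(test_dir: str = "tests") -> dict:
    """Map each mode to the test case files found in the folder of the same name."""
    cases = {}
    for mode in MODES:
        folder = os.path.join(test_dir, mode)
        if os.path.isdir(folder):
            # Skip hidden bookkeeping files such as .mjtest_correct_testcases_*.
            cases[mode] = sorted(f for f in os.listdir(folder) if not f.startswith("."))
    return cases
```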
@@ -29,6 +30,22 @@ The different types of test cases are differentiated by their file endings.
Side note: An error code greater than 0 should result in an error message on the error output containing the word `error`.
Test types for the lexer mode
------------------------------

<table>
<tr><th>File ending(s) of test cases</th><th>Expected behaviour to complete a test of this type</th></tr>
<tr>
<td><code>.valid.mj</code> <code>.mj</code></td>
<td>Return code is <code>0</code> and the output matches the expected output (located in the file <code>[test file].out</code>)</td>
</tr>
<tr>
<td><code>.invalid.mj</code></td>
<td>Return code is <code>&gt; 0</code> and the error output contains the word <code>error</code></td>
</tr>
</table>
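A minimal sketch of how such a lexer test case could be judged; the flag passed to the compiler's `run` script is a placeholder, since only `--parsecheck` (for the syntax mode) is named in this README:

```python
import subprocess

def check_lexer_case(run_script: str, test_file: str) -> bool:
    """Judge one lexer test case according to the table above."""
    # "--lextest" is a hypothetical flag; the actual lexer flag is not specified here.
    proc = subprocess.run([run_script, "--lextest", test_file],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True)
    if test_file.endswith(".invalid.mj"):
        # Invalid cases: return code > 0 and 'error' on the error output.
        return proc.returncode > 0 and "error" in proc.stderr
    # Valid cases: return code 0 and output matching `[test file].out`.
    with open(test_file + ".out") as f:
        expected = f.read()
    return proc.returncode == 0 and proc.stdout.strip() == expected.strip()
```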
Test types for the syntax mode
------------------------------
@@ -72,7 +89,7 @@ Test runner
### Requirements

The following programs are required (and executable by simply calling their names).

- `python3` (at least Python3.3)
- `javac` and `java` (for `.java` test cases)
### Installation
@@ -92,13 +109,14 @@ Output of `./mjt.py --help`
```
usage: mjt.py [-h] [--only_incorrect_tests] [--produce_no_reports]
              [--produce_all_reports] [--parallel]
              [--output_no_incorrect_reports] [--ci_testing]
              [--log_level LOG_LEVEL]
              {all,lexer,syntax,ast,semantic,exec} MJ_RUN
MiniJava test runner

positional arguments:
  {all,lexer,syntax,ast,semantic,exec}
                        What do you want to test?
  MJ_RUN                Command to run your MiniJava implementation, e.g.
                        `mj/run`, can be omitted by assigning the environment
@@ -115,6 +133,10 @@ optional arguments:
  --parallel            Run the tests in parallel
  --output_no_incorrect_reports
                        Output the long report for every incorrect test case
  --ci_testing          In mode X the succeeding test cases of later
                        modes/phases should also succeed in this mode, and
                        failing test cases of prior modes/phases should also
                        fail in this phase.
  --log_level LOG_LEVEL
                        Logging level (error, warn, info or debug)
```
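For example, a CI job could invoke the runner like this (the path to the compiler's `run` script is hypothetical, and it is assumed that the runner signals failed test cases through a non-zero exit status):

```python
import subprocess

# Run the lexer tests; --ci_testing additionally pulls in later-phase cases
# that should succeed and earlier-phase cases that should fail, as described above.
result = subprocess.run(["./mjt.py", "lexer", "../minijava/run", "--ci_testing"])
if result.returncode != 0:
    raise SystemExit("lexer tests failed")
```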
...
@@ -3,6 +3,7 @@ import os
import sys
import argparse
from typing import Dict
from datetime import datetime
from mjtest.environment import TestMode, Environment, TEST_MODES
from mjtest.test.tests import TestSuite
@@ -47,6 +48,9 @@ if True:#__name__ == '__main__':
help="Run the tests in parallel") help="Run the tests in parallel")
parser.add_argument("--output_no_incorrect_reports", action="store_true", default=False, parser.add_argument("--output_no_incorrect_reports", action="store_true", default=False,
help="Output the long report for every incorrect test case") help="Output the long report for every incorrect test case")
parser.add_argument("--ci_testing", action="store_true", default=False,
help="In mode X the succeeding test cases of later modes/phases should also succeed in "
"this mode, and failing test cases of prior modes/phases should also fail in this phase.")
#parser.add_argument("--timeout", action="store_const", default=30, const="timeout", #parser.add_argument("--timeout", action="store_const", default=30, const="timeout",
# help="Abort a program after TIMEOUT seconds") # help="Abort a program after TIMEOUT seconds")
#parser.add_argument("--report_dir", action="store_const", default="", const="report_dir", #parser.add_argument("--report_dir", action="store_const", default="", const="report_dir",
@@ -73,7 +77,9 @@ if True:#__name__ == '__main__':
        failed += ret.failed
        count += ret.count
    if args["mode"] == "all":
        report_subdir = datetime.now().strftime("%d-%m-%y_%H-%M-%S")
        for mode in TEST_MODES:
            args["report_subdir"] = report_subdir + "_" + mode
            cprint("Run {} tests".format(mode), attrs=["bold"])
            args["mode"] = mode
            run(args)
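Since the timestamp is computed once before the loop, every per-mode report folder of a single `all` run shares the same prefix; only the mode suffix differs. A quick illustration (mode names taken from the CLI above):

```python
from datetime import datetime

prefix = datetime.now().strftime("%d-%m-%y_%H-%M-%S")
print([prefix + "_" + mode for mode in ["lexer", "syntax", "ast", "semantic", "exec"]])
# e.g. ['05-11-16_14-30-05_lexer', '05-11-16_14-30-05_syntax', ...]
```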
...
@@ -36,7 +36,8 @@ class Environment:
                 only_incorrect_tests: bool = False, parallel: bool = False,
                 timeout: int = 30, report_dir: str = "", log_level: str = "warn",
                 produce_no_reports: bool = True, output_no_incorrect_reports: bool = False,
                 produce_all_reports: bool = False, report_subdir: str = None,
                 ci_testing: bool = False):
        self.mode = mode
        self.mj_run_cmd = os.path.realpath(mj_run)
@@ -70,13 +71,14 @@ class Environment:
                os.mkdir(self.report_dir)
            except IOError:
                pass
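            # Reuse the report subdirectory handed in by the caller (the `all`
            # mode driver sets one per mode) or fall back to a fresh timestamp.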
            self.report_dir = os.path.join(self.report_dir, report_subdir or datetime.now().strftime("%d-%m-%y_%H-%M-%S"))
        else:
            self.report_dir = None
        logging.basicConfig(level=self.LOG_LEVELS[log_level])
        self.produce_reports = not produce_no_reports  # type: bool
        self.output_incorrect_reports = not output_no_incorrect_reports
        self.produce_all_reports = produce_all_reports
        self.ci_testing = ci_testing

    def create_tmpfile(self) -> str:
        return os.path.join(self.tmp_dir, str(os.times()))
...
import shutil, logging
from mjtest.environment import Environment, TestMode
from mjtest.test.tests import TestCase, BasicDiffTestResult, BasicTestResult
from os import path

_LOG = logging.getLogger("tests")
@@ -27,14 +27,15 @@ class ASTDiffTest(TestCase):
    def run(self) -> BasicDiffTestResult:
        out, err, rtcode = self.env.run_mj_command(self.MODE, self.file)
        exp_out = ""
        if rtcode == 0 and self.should_succeed():
            if self._has_expected_output_file and self.type == self.MODE and self.env.mode == self.MODE:
                with open(self._expected_output_file, "r") as f:
                    exp_out = f.read()
            #else:
            #    _LOG.error("Expected output file for test case {}:{} is missing.".format(self.MODE, self.short_name()))
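        # Only produce a diff result (with expected-output comparison) when both
        # the test case's type and the runner's mode match this class' MODE;
        # otherwise fall back to a plain result without output diffing.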
        if self.type == self.MODE and self.env.mode == self.MODE:
            return BasicDiffTestResult(self, rtcode, out.decode(), err.decode(), exp_out)
        return BasicTestResult(self, rtcode, out.decode(), err.decode())

class LexerDiffTest(ASTDiffTest):
...
@@ -71,7 +71,7 @@ class TestSuite:
            del self.test_cases[mode]

    def _log_file_for_type(self, type: str):
        return join(self.env.test_dir, type, ".mjtest_correct_testcases_" + self.env.mode)

    def _add_correct_test_case(self, test_case: 'TestCase'):
        self.correct_test_cases[test_case.type].add(basename(test_case.file))
@@ -197,9 +197,12 @@ class TestCase:
    def can_run(self, mode: str = "") -> bool:
        mode = mode or self.env.mode
        types = TEST_MODES[TEST_MODES.index(self.env.mode):]
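        # `types` contains the current mode and all later phases. With --ci_testing,
        # cases of later phases that should succeed and failing cases of earlier
        # phases also run; without it, only cases of the current mode are run.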
        if self.env.ci_testing:
            return self.type == mode or \
                   (self.type in types and self.should_succeed()) or \
                   (self.type not in types and not self.should_succeed())
        else:
            return self.type == mode

    def run(self) -> 'TestResult':
        raise NotImplementedError()
@@ -330,13 +333,13 @@ class BasicDiffTestResult(BasicTestResult):
    def __init__(self, test_case: TestCase, error_code: int, output: str, error_output: str, expected_output: str):
        super().__init__(test_case, error_code, output, error_output)
        self.expected_output = expected_output
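        # Strip both sides so that trailing whitespace or a final newline does
        # not make an otherwise matching output count as incorrect.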
        self._is_output_correct = self.expected_output.strip() == self.output.strip()
        if self.is_correct():
            self.add_additional_text("Expected and actual output", self.output)
        elif self.succeeded() and self.test_case.should_succeed():
            self.add_additional_text("Diff[expected output, actual output]", self._output_diff())
            self.add_additional_text("Expected output", self.expected_output)
            #self.add_additional_text("Actual output", self.output)

    def is_correct(self):
        if self.succeeded():
@@ -345,7 +348,7 @@ class BasicDiffTestResult(BasicTestResult):
            return super().is_correct() and self._contains_error_str
    def _output_diff(self) -> str:
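        # difflib.Differ().compare expects sequences of lines, so split with
        # keepends=True and join the resulting generator back into one string.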
        return "".join(difflib.Differ().compare(self.expected_output.splitlines(True), self.output.splitlines(True)))
    def is_output_correct(self) -> str:
        return self._is_output_correct
...