IPDSnelting / mjtest

Commit 00d02cb3, authored Nov 07, 2016 by Johannes Bechberger

Add lexer tests and fix diff test result

Parent: c1b6f48b
Showing 5 changed files with 53 additions and 19 deletions:

- README.mdwn (+28, -6)
- mjtest/cli.py (+6, -0)
- mjtest/environment.py (+4, -2)
- mjtest/test/ast_tests.py (+5, -4)
- mjtest/test/tests.py (+10, -7)
README.mdwn
...
...
@@ -12,14 +12,15 @@ such code as it probably helps the other teams (and could later be integrated in
 Test modes
 ----------
-The test cases are divided in 3 'modes':
+The test cases are divided in 5 'modes':
+- __lexer__: Test cases that check the lexed tokens (and their correct output)
 - __syntax__: Test cases that just check whether `./run --parsecheck` accepts them as correct or rejects them.
 - __ast__: Test cases that check the generated ast.
 - __semantic__: Test cases that check semantic checking of MiniJava programs
 - __exec__: Test cases that check the correct compilation of MiniJava programs.
-_Only the syntax mode is currently usable, but the other three will follow._
+_Only the lexer, syntax and ast modes are currently usable, but the others will follow._
 The different test cases for each mode are located in a folder with the same name.
 The default directory that contains all test folders is `tests`.
...
...
@@ -29,6 +30,22 @@ The different types of test cases are differentiated by their file endings.
 Side note: An error code greater than 0 should result in an error message on the error output containing the word `error`.
 
+Test types for the lexer mode
+------------------------------
+<table>
+    <tr><th>File ending(s) of test cases</th><th>Expected behaviour to complete a test of this type</th></tr>
+    <tr>
+        <td><code>.valid.mj</code> <code>.mj</code></td>
+        <td>Return code is <code>0</code> and the output matches the expected output (located in the file <code>[test file].out</code>)</td>
+    </tr>
+    <tr>
+        <td><code>.invalid.mj</code></td>
+        <td>Return code is <code>> 0</code> and the error output contains the word <code>error</code></td>
+    </tr>
+</table>
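To make the two rows of this table concrete, here is a minimal, illustrative sketch of the pass criterion for lexer-mode cases. The helper name `passes_lexer_case`, the `--lextest` flag and the use of `subprocess.run` are assumptions for illustration, not code from mjtest:

```python
import subprocess
from pathlib import Path

def passes_lexer_case(run_cmd: str, test_file: str) -> bool:
    """Sketch of the table above: .mj/.valid.mj cases must exit with 0 and match
    `<test file>.out`; .invalid.mj cases must exit with a code > 0 and print
    something containing 'error' on the error output."""
    result = subprocess.run([run_cmd, "--lextest", test_file],
                            capture_output=True, text=True)
    if test_file.endswith(".invalid.mj"):
        return result.returncode > 0 and "error" in result.stderr
    expected = Path(test_file + ".out").read_text()
    return result.returncode == 0 and result.stdout.strip() == expected.strip()
```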
Test types for the syntax mode
------------------------------
...
...
@@ -72,7 +89,7 @@ Test runner
 ### Requirements
 The following programs are required (and executable by simply calling their names).
 - `python3` (at least Python3.3)
 - `javac` and `java` (for `.java` test cases)
 
 ### Installation
...
...
@@ -92,13 +109,14 @@ Output of the `./mjt.py --help`
 ```
 usage: mjt.py [-h] [--only_incorrect_tests] [--produce_no_reports]
               [--produce_all_reports] [--parallel]
-              [--output_no_incorrect_reports] [--log_level LOG_LEVEL]
-              {syntax,ast,semantic,exec} MJ_RUN
+              [--output_no_incorrect_reports] [--ci_testing]
+              [--log_level LOG_LEVEL]
+              {all,lexer,syntax,ast,semantic,exec} MJ_RUN
 
 MiniJava test runner
 
 positional arguments:
-  {syntax,ast,semantic,exec}
+  {all,lexer,syntax,ast,semantic,exec}
                         What do you want to test?
   MJ_RUN                Command to run your MiniJava implementation, e.g.
                         `mj/run`, can be omitted by assigning the environment
...
...
@@ -115,6 +133,10 @@ optional arguments:
   --parallel            Run the tests in parallel
   --output_no_incorrect_reports
                         Output the long report for every incorrect test case
+  --ci_testing          In mode X the succeeding test cases of later
+                        modes/phases should also succeed in this mode, and
+                        failing test cases of prior modes/phases should also
+                        fail in this phase.
   --log_level LOG_LEVEL
                         Logging level (error, warn, info or debug)
 ```
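A worked example of the `--ci_testing` contract, assuming the phases are ordered lexer, syntax, ast, semantic, exec (the `TEST_MODES` list itself is not shown in this diff): when running the `syntax` mode, a should-succeed `ast` or `exec` case must also pass the syntax check, and a should-fail `lexer` case must already fail it. A hypothetical sketch of that rule (the real logic lives in `TestCase.can_run` in mjtest/test/tests.py below):

```python
# Illustrative only; PHASES and the function name are assumptions.
PHASES = ["lexer", "syntax", "ast", "semantic", "exec"]

def runs_under_ci(case_phase: str, should_succeed: bool, mode: str) -> bool:
    current_and_later = PHASES[PHASES.index(mode):]
    if case_phase == mode:
        return True                     # the mode's own cases always run
    if case_phase in current_and_later:
        return should_succeed           # succeeding cases of later phases
    return not should_succeed           # failing cases of prior phases

assert runs_under_ci("ast", True, "syntax")      # later phase, should succeed
assert runs_under_ci("lexer", False, "syntax")   # prior phase, should fail
assert not runs_under_ci("exec", False, "syntax")
```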
...
...
mjtest/cli.py
...
...
@@ -3,6 +3,7 @@ import os
 import sys
 import argparse
 from typing import Dict
+from datetime import datetime
 from mjtest.environment import TestMode, Environment, TEST_MODES
 from mjtest.test.tests import TestSuite
...
...
@@ -47,6 +48,9 @@ if True:#__name__ == '__main__':
                         help="Run the tests in parallel")
     parser.add_argument("--output_no_incorrect_reports", action="store_true", default=False,
                         help="Output the long report for every incorrect test case")
+    parser.add_argument("--ci_testing", action="store_true", default=False,
+                        help="In mode X the succeeding test cases of later modes/phases should also succeed in "
+                             "this mode, and failing test cases of prior modes/phases should also fail in this phase.")
     #parser.add_argument("--timeout", action="store_const", default=30, const="timeout",
     #                    help="Abort a program after TIMEOUT seconds")
     #parser.add_argument("--report_dir", action="store_const", default="", const="report_dir",
...
...
@@ -73,7 +77,9 @@ if True:#__name__ == '__main__':
             failed += ret.failed
             count += ret.count
 
     if args["mode"] == "all":
+        report_subdir = datetime.now().strftime("%d-%m-%y_%H-%M-%S")
         for mode in TEST_MODES:
+            args["report_subdir"] = report_subdir + "_" + mode
             cprint("Run {} tests".format(mode), attrs=["bold"])
             args["mode"] = mode
             run(args)
...
...
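In the `all` branch above, one timestamp is taken before the loop and the mode name is appended per iteration, so every phase of the same run reports into its own subdirectory while still being grouped by a common prefix. A small sketch of the resulting names (the mode list is written out here for illustration):

```python
from datetime import datetime

stamp = datetime.now().strftime("%d-%m-%y_%H-%M-%S")   # shared by the whole run
for mode in ["lexer", "syntax", "ast", "semantic", "exec"]:
    report_subdir = stamp + "_" + mode
    print(report_subdir)   # e.g. 07-11-16_14-30-05_lexer, 07-11-16_14-30-05_syntax, ...
```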
mjtest/environment.py
...
...
@@ -36,7 +36,8 @@ class Environment:
                  only_incorrect_tests: bool=False, parallel: bool=False,
                  timeout: int=30, report_dir: str="",
                  log_level: str="warn", produce_no_reports: bool=True,
                  output_no_incorrect_reports: bool=False,
-                 produce_all_reports: bool=False):
+                 produce_all_reports: bool=False,
+                 report_subdir: str=None, ci_testing: bool=False):
         self.mode = mode
         self.mj_run_cmd = os.path.realpath(mj_run)
...
...
@@ -70,13 +71,14 @@ class Environment:
                 os.mkdir(self.report_dir)
             except IOError:
                 pass
-            self.report_dir = os.path.join(self.report_dir, datetime.now().strftime("%d-%m-%y_%H-%M-%S"))
+            self.report_dir = os.path.join(self.report_dir, report_subdir or datetime.now().strftime("%d-%m-%y_%H-%M-%S"))
         else:
             self.report_dir = None
         logging.basicConfig(level=self.LOG_LEVELS[log_level])
         self.produce_reports = not produce_no_reports  # type: bool
         self.output_incorrect_reports = not output_no_incorrect_reports
         self.produce_all_reports = produce_all_reports
+        self.ci_testing = ci_testing
 
     def create_tmpfile(self) -> str:
         return os.path.join(self.tmp_dir, str(os.times()))
...
...
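The new `report_subdir` parameter defaults to `None`, and `report_subdir or datetime.now().strftime(...)` falls back to a fresh timestamp in that case, so single-mode runs behave as before while `all` runs can pass a shared name. A minimal sketch of that idiom with illustrative values:

```python
import os
from datetime import datetime

def resolve_report_dir(base_dir: str, report_subdir: str = None) -> str:
    # `or` keeps the caller-supplied subdirectory when it is truthy,
    # otherwise a timestamp is generated, mirroring Environment.__init__ above.
    return os.path.join(base_dir,
                        report_subdir or datetime.now().strftime("%d-%m-%y_%H-%M-%S"))

print(resolve_report_dir("reports", "07-11-16_14-30-05_lexer"))  # reports/07-11-16_14-30-05_lexer
print(resolve_report_dir("reports"))                             # reports/<current timestamp>
```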
mjtest/test/ast_tests.py
 import shutil, logging
 from mjtest.environment import Environment, TestMode
-from mjtest.test.tests import TestCase, BasicDiffTestResult
+from mjtest.test.tests import TestCase, BasicDiffTestResult, BasicTestResult
 from os import path
 
 _LOG = logging.getLogger("tests")
...
...
@@ -27,14 +27,15 @@ class ASTDiffTest(TestCase):
     def run(self) -> BasicDiffTestResult:
         out, err, rtcode = self.env.run_mj_command(self.MODE, self.file)
         exp_out = ""
-        if rtcode > 0 and self.should_succeed():
+        if rtcode == 0 and self.should_succeed():
             if self._has_expected_output_file and self.type == self.MODE and self.env.mode == self.MODE:
                 with open(self._expected_output_file, "r") as f:
                     exp_out = f.read()
             #else:
             #    _LOG.error("Expected output file for test case {}:{} is missing.".format(self.MODE, self.short_name()))
-        return BasicDiffTestResult(self, rtcode, out.decode(), err.decode(), exp_out)
+        if self.type == self.MODE and self.env.mode == self.MODE:
+            return BasicDiffTestResult(self, rtcode, out.decode(), err.decode(), exp_out)
+        return BasicTestResult(self, rtcode, out.decode(), err.decode())
 
 
 class LexerDiffTest(ASTDiffTest):
...
...
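The reworked `run()` only wraps the result in a `BasicDiffTestResult` (which diffs against an expected-output file) when the case's type matches the mode being run; cases borrowed from other phases get a plain `BasicTestResult` and are therefore not compared against an output file. A condensed sketch of that branching, with stand-in result classes:

```python
# Stand-ins for the result classes from mjtest.test.tests, for illustration.
class BasicTestResult:
    def __init__(self, rtcode, out, err):
        self.rtcode, self.out, self.err = rtcode, out, err

class BasicDiffTestResult(BasicTestResult):
    def __init__(self, rtcode, out, err, expected_output):
        super().__init__(rtcode, out, err)
        self.expected_output = expected_output

def make_result(case_type, env_mode, mode, rtcode, out, err, exp_out):
    # Diff against the expected output only when this case really belongs to
    # the mode being tested; otherwise fall back to the plain result.
    if case_type == mode and env_mode == mode:
        return BasicDiffTestResult(rtcode, out, err, exp_out)
    return BasicTestResult(rtcode, out, err)
```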
mjtest/test/tests.py
...
...
@@ -71,7 +71,7 @@ class TestSuite:
             del self.test_cases[mode]
 
     def _log_file_for_type(self, type: str):
-        return join(self.env.test_dir, type, ".mjtest_correct_testcases")
+        return join(self.env.test_dir, type, ".mjtest_correct_testcases_" + self.env.mode)
 
     def _add_correct_test_case(self, test_case: 'TestCase'):
         self.correct_test_cases[test_case.type].add(basename(test_case.file))
...
...
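Appending the current mode to the cache-file name means the set of already-passed test cases is remembered separately for each mode rather than shared across modes. For example (values are illustrative):

```python
from os.path import join

test_dir, case_type, mode = "tests", "syntax", "lexer"   # illustrative values
old_cache = join(test_dir, case_type, ".mjtest_correct_testcases")
new_cache = join(test_dir, case_type, ".mjtest_correct_testcases_" + mode)
print(old_cache)  # tests/syntax/.mjtest_correct_testcases
print(new_cache)  # tests/syntax/.mjtest_correct_testcases_lexer
```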
@@ -197,9 +197,12 @@ class TestCase:
     def can_run(self, mode: str="") -> bool:
         mode = mode or self.env.mode
         types = TEST_MODES[TEST_MODES.index(self.env.mode):]
-        return self.type == mode or \
-               (self.type in types and self.should_succeed()) or \
-               (self.type not in types and not self.should_succeed())
+        if self.env.ci_testing:
+            return self.type == mode or \
+                   (self.type in types and self.should_succeed()) or \
+                   (self.type not in types and not self.should_succeed())
+        else:
+            return self.type == mode
 
     def run(self) -> 'TestResult':
         raise NotImplementedError()
...
...
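The slice `TEST_MODES[TEST_MODES.index(self.env.mode):]` yields the current phase plus every later one, which is what the cross-phase rule needs; without `--ci_testing`, `can_run` now only accepts cases whose type equals the requested mode. Assuming `TEST_MODES` is ordered lexer, syntax, ast, semantic, exec (the list is not shown in this diff), the slice behaves like this:

```python
TEST_MODES = ["lexer", "syntax", "ast", "semantic", "exec"]  # assumed ordering

mode = "ast"
types = TEST_MODES[TEST_MODES.index(mode):]
print(types)   # ['ast', 'semantic', 'exec'] -- the current phase and all later ones

# Under --ci_testing: run a case if its type is `mode`, or it is a should-succeed
# case from `types`, or a should-fail case from an earlier phase.
# Without --ci_testing: run only cases whose type equals `mode`.
```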
@@ -330,13 +333,13 @@ class BasicDiffTestResult(BasicTestResult):
     def __init__(self, test_case: TestCase, error_code: int, output: str, error_output: str, expected_output: str):
         super().__init__(test_case, error_code, output, error_output)
         self.expected_output = expected_output
-        self._is_output_correct = self.expected_output.strip() == self.output
+        self._is_output_correct = self.expected_output.strip() == self.output.strip()
         if self.is_correct():
             self.add_additional_text("Expected and actual output", self.output)
         elif self.succeeded() and self.test_case.should_succeed():
             self.add_additional_text("Diff[expected output, actual output]", self._output_diff())
             self.add_additional_text("Expected output", self.expected_output)
-            self.add_additional_text("Actual output", self.output)
+            #self.add_additional_text("Actual output", self.output)
 
     def is_correct(self):
         if self.succeeded():
...
...
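Stripping both sides before comparing makes the check symmetric: previously only the expected output was stripped, so a trailing newline in the program's output could fail a case whose content was otherwise identical. For instance (the token lines are illustrative):

```python
expected_output = "identifier foo\nEOF\n"   # as read from the .out file
output = "identifier foo\nEOF\n"            # as produced by the tested compiler

print(expected_output.strip() == output)          # False -- old, one-sided comparison
print(expected_output.strip() == output.strip())  # True  -- new, symmetric comparison
```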
@@ -345,7 +348,7 @@ class BasicDiffTestResult(BasicTestResult):
         return super().is_correct() and self._contains_error_str
 
     def _output_diff(self) -> str:
-        return difflib.Differ().compare(self.expected_output, self.output)
+        return "".join(difflib.Differ().compare(self.expected_output.splitlines(True), self.output.splitlines(True)))
 
     def is_output_correct(self) -> str:
         return self._is_output_correct
...
...
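`difflib.Differ().compare()` expects two sequences of lines and returns a generator of diff lines, so the previous `_output_diff` handed it whole strings (which are compared character by character) and returned the generator object rather than text. Splitting with `splitlines(True)` and joining the generator, as in the fix above, produces a printable line-by-line diff:

```python
import difflib

expected = "class A {\n  public int x;\n}\n"
actual = "class A {\n  public int y;\n}\n"

# splitlines(True) keeps the line endings, so "".join(...) yields a readable
# multi-line diff; lines are prefixed with ' ', '-', '+' (and '?' hint lines).
diff = "".join(difflib.Differ().compare(expected.splitlines(True),
                                        actual.splitlines(True)))
print(diff)
```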