146 commits
2be9473
Ignore development benchmark
trtikm Sep 12, 2024
9f0fd38
Fixed project's name
trtikm Sep 14, 2024
f4fa236
Silencing boost's wait_for 'deprecated' message
trtikm Sep 14, 2024
c3c9eda
Instrumenter: Using new LLVM's PassManager
trtikm Sep 14, 2024
9db751f
Updated README - added install of 32-bit std lib on Linux.
trtikm Sep 14, 2024
0bf0d04
feat: creating equation matrix
Sep 15, 2024
712a18d
Use static linking with boost.
trtikm Sep 16, 2024
9b56efe
Cleanup in CMakeLists files.
trtikm Sep 16, 2024
cfce1f7
Connection: Introduced 'medium' as parent of 'message' and 'shared_me…
trtikm Sep 16, 2024
f867b68
Cleanup in iomodels
trtikm Sep 16, 2024
723cbd3
Cleanup in fuzz_target
trtikm Sep 16, 2024
0925a72
Added missing default program option (data)
trtikm Sep 16, 2024
bff2095
Sensitivity: fixed bug - do not make sensitive bits which were not re…
trtikm Sep 16, 2024
1b61266
Improved output from fizzer
trtikm Sep 16, 2024
aeef0f8
feat: added interesting nodes set functionality
Sep 16, 2024
943287e
Benchmarks config changed: stdin_replay_bytes_then_repeat_(85->zero)
trtikm Sep 16, 2024
9e64c89
Improved instrumentation
trtikm Sep 16, 2024
fccb831
Updated json of one benchmark
trtikm Sep 17, 2024
acfd5a1
Reducing randomness in fuzzer.
trtikm Sep 17, 2024
ae7af3a
feat: updating matrix with one equation
Sep 17, 2024
12a249b
feat: equation approximation
Sep 17, 2024
a0dcb86
feat: struct refactoring
Sep 17, 2024
a1831a3
feat: small changes
Sep 18, 2024
59e8f36
feat: doxygen comment for new functions
Sep 22, 2024
098fe2c
feat: doxygen comment for new functions
Sep 22, 2024
08d9f6b
feat: small changes and output operator
Sep 22, 2024
b9f28c0
feat: add timings
Sep 22, 2024
39322a9
feat: gradient descent
Sep 24, 2024
0dd48b3
fix: changed add_equation to update best_values correctly
Sep 24, 2024
4b7c888
chore: formatting and docstring
Sep 24, 2024
1f554f0
feat: compute iid_dependance only for iid_node
Sep 25, 2024
b775151
feat: refactoring of gradient descent computation
Sep 26, 2024
1e1ff49
feat: added docstring
Sep 26, 2024
514c58d
feat: compute mean depth for every best_value
Sep 27, 2024
e1f8142
feat: move iid_node_dependencies to separate file
Sep 29, 2024
9762859
feat: get possible depth
Sep 29, 2024
9ecaa16
feat: compute node counts
Sep 30, 2024
3355f43
feat: better mean computation
Oct 1, 2024
69f615b
feat: path creation
Oct 2, 2024
6316a9f
feat: path following
Oct 3, 2024
c186bbb
feat: update depth
Oct 12, 2024
365156e
feat: change names
Oct 12, 2024
75ba932
feat: add momentum parameter
Oct 15, 2024
75c699c
feat: mini-batch gradient descent
Oct 15, 2024
da59f6a
feat: changes from meeting
Oct 16, 2024
8cb3e1f
feat: some changes
Nov 1, 2024
2130557
feat: dependencies_by_loading
Nov 4, 2024
2df4053
feat: more outputting
Nov 5, 2024
faa30fe
feat: return if the gradient descent converged
Nov 5, 2024
959a511
feat: new gradient descent
Nov 15, 2024
83a183e
feat: new gradient descent
Nov 15, 2024
49bde51
feat: gradient descent convergence computation
Nov 15, 2024
d9d07f4
feat: gradient descent convergence computation
Nov 17, 2024
8b05d0d
feat: add locked columns
Nov 18, 2024
25f3198
feat: print gradient descent information
Nov 18, 2024
ce3e4e0
feat: outputing
Nov 26, 2024
dba09c7
feat: vector
Nov 27, 2024
96a0808
feat: vector computation
Dec 3, 2024
36677ab
feat: new file for analysis
Dec 21, 2024
a218698
feat: node processing
Dec 22, 2024
59bc263
feat: loop processing
Dec 22, 2024
abf0bad
feat: matrix methods
Dec 22, 2024
a38fa5b
feat: vector computation
Dec 22, 2024
a67f518
feat: enhance equation operations and add new path computation
Dec 23, 2024
73c6163
feat: add node_counts structure and compute_path_counts method for pa…
Dec 23, 2024
4cb1d5c
feat: improve path count handling
Dec 23, 2024
c395c30
feat: remove probabilities with both zeros
Dec 27, 2024
c604118
feat: add new IID testing benchmarks and update existing benchmark co…
Dec 27, 2024
3fd36aa
feat: add new IID testing benchmarks and update existing condition ch…
Jan 8, 2025
ce547c0
feat: add new benchmark for input cycle and update execution counts
Jan 8, 2025
67151e9
feat: update loop thresholds and execution counts in IID testing benc…
Jan 8, 2025
bd00c7f
feat: update execution counts and add original execution values in II…
Jan 10, 2025
9337689
feat: add new IID testing benchmarks for non-IID conditions
Jan 10, 2025
7775c12
feat: add new IID testing benchmark for condition checks with executi…
Jan 10, 2025
b46197e
feat: implement path_node_props and possible_path structures with dir…
Jan 10, 2025
c26cc77
feat: add ostream operators for path_node_props and possible_path str…
Jan 11, 2025
81510da
feat: enhance direction handling in path_node_props and equation_matr…
Jan 11, 2025
ed2b796
feat: enhance iid_node_dependence_props with const correctness and ad…
Jan 11, 2025
5e6997c
feat: add iid_vector_analysis and update fuzzer to include new analys…
Jan 19, 2025
da5f1ad
Merge commit 'acfd5a18a8dc7e98ee77a0b53026208176ea12fd' into jakub/ii…
Jan 19, 2025
af2d4d9
feat: update benchmark JSON files to include detailed output statisti…
Jan 19, 2025
db45978
fix: correct CMakeLists.txt formatting by removing unnecessary newline
Jan 19, 2025
7f2719a
feat: update after merge
Jan 21, 2025
902c833
feat: update output statistics in benchmark JSON files for improved a…
Jan 21, 2025
bd56a79
feat: add double scalar multiplication and update vector handling in …
Jan 22, 2025
066634b
feat: add new benchmarks for different predicates in IID testing
Jan 22, 2025
7445203
refactor: rename loop_head_direction to loop_head_end_direction and u…
Jan 22, 2025
bbd157f
refactor: renaming functions
Jan 22, 2025
4bd896a
feat: add support for programs with loops inside loops
Jan 23, 2025
be4cdc8
feat: generate more data after iid node is covered
Jan 25, 2025
4ed8652
feat: update generated test counts and execution limits in IID benchm…
Jan 25, 2025
3c16938
feat: generate more data from previous iiid nodes if there not enough…
Jan 25, 2025
bde3a88
feat: update execution counts in IID benchmark JSON files to correspo…
Jan 25, 2025
7ff172c
feat: add linear dependency check to equation
Jan 25, 2025
cb71d9c
feat: update execution counts in IID benchmark JSON files after addin…
Jan 25, 2025
b49b836
feat: adjust project declaration in CMakeLists.txt
Jan 25, 2025
2d898c2
feat: use namespace
Jan 26, 2025
be962fb
feat: add new benchmark for nested big values condition testing
Jan 27, 2025
d522cea
feat: add new benchmarks for loading loop tests with varied conditions
Jan 31, 2025
c6ad915
feat: update execution counts in loading loop benchmark JSON files
Jan 31, 2025
d9336e3
feat: changed dependencies by loading to handle to use loaded bits an…
Jan 31, 2025
e73f4c9
refactor: remove old dependencies_by_loading, that did not use sensit…
Jan 31, 2025
fe3f3fb
feat: add new benchmarks for input cycle tests with nested loops
Feb 1, 2025
d489f20
feat: update execution counts in benchmark JSON files for input cycle…
Feb 1, 2025
5a90565
feat: add generation if there are not enough data
Feb 1, 2025
7f5bf5a
feat: update execution counts in benchmark JSON files for IID testing
Feb 1, 2025
f4915a2
feat: update leaf computation and state management
Feb 1, 2025
cbb2ae2
feat: support for multiple loop heads in one loop
Feb 8, 2025
8b05ab4
feat: add benchmarks for computation in loop and multiple loop heads
Feb 9, 2025
89da9f1
feat: update conditions in multiple loop heads benchmark
Feb 9, 2025
db813c7
feat: update location_id usage to id_type in loop and dependency stru…
Feb 9, 2025
a78df71
feat: add output of statistics
Feb 12, 2025
7730c41
feat: remove two definitions of loop properties
Feb 12, 2025
d117ccb
feat: renamed methods and types
Feb 12, 2025
b98c627
feat: update num_executions in benchmark input files
Feb 12, 2025
0389c46
feat: add saving data to json file
Feb 12, 2025
62f2e34
feat: update compute_loading_loops to include loop_endings parameter
Feb 12, 2025
350ef51
feat: correctly update ignored_nodes
Feb 26, 2025
39b9f85
feat: change the code to be more efficient - collect on whole trace, …
Mar 1, 2025
65abb9b
feat: make computation of loading loops more efficient
Mar 6, 2025
2301ad2
feat: change computation to start only if iid is present
Mar 11, 2025
713da71
feat: use context
Mar 12, 2025
02b968c
Merge branch 'jakub/iid_use_context' into jakub/iid_node_algorithm_de…
Mar 12, 2025
3527671
feat: computation of path little bit more efficient
Mar 12, 2025
575f7b9
feat: better integration into `select_iid_coverage_target`
Mar 12, 2025
fcbf8a1
feat: compute only when needed
Mar 15, 2025
0f47797
feat: better dumping
Mar 15, 2025
0c1ab64
feat: better dumping
Mar 15, 2025
09a954a
feat: compute vector with hits and submatrix only once
Mar 16, 2025
9a2052d
feat: improvement of path_id_direction_count
Mar 18, 2025
8184671
feat: add FloatComparator and enhance generation state management
Mar 28, 2025
6a68200
temp changes
Mar 28, 2025
f2a2a75
feat: update executions counts
Apr 12, 2025
c899ae2
feat: add ai generated benchmarks and benchmarks from testcomp
Apr 12, 2025
fba4413
feat: better node choosing
Apr 12, 2025
8fd147d
feat: formatter
Apr 12, 2025
2485fb1
fix: json properties
Apr 13, 2025
661e0eb
feat: implement better handling of loading loops
Apr 13, 2025
6c2b3da
feat: added more benchmarks
Apr 21, 2025
3574bc7
feat: remove bad benchmarks
May 10, 2025
a1cadd1
feat: select best configuration
May 10, 2025
9f469cf
feat: remove old files from development
May 10, 2025
b9870f4
feat: rename and remove benchmarks
May 10, 2025
72ad3b8
feat: rename benchmarks for clear description
May 10, 2025
9786aee
feat: update executions with the final version of IID Vector analysis
May 10, 2025
0cd8051
feat: remove include
May 10, 2025
3 changes: 2 additions & 1 deletion .gitignore
@@ -11,4 +11,5 @@
/CMakeSettings.json
/cmake-build-*
**/__pycache__
/benchmarks/pending/__debug__*
/benchmarks/pending/__debug__*
/benchmarks/fast/_.c
20 changes: 11 additions & 9 deletions CMakeLists.txt
@@ -1,7 +1,7 @@
project(fizz)

cmake_minimum_required(VERSION 3.20 FATAL_ERROR)

project(fizzer)

macro(append_compiler_flags FLAGS)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FLAGS}")
endmacro()
@@ -70,12 +70,13 @@ find_package(Threads REQUIRED)

# find and add Boost
message("Searching for Boost library ...")
find_package(Boost 1.69 REQUIRED COMPONENTS filesystem)
set(Boost_USE_STATIC_LIBS ON)
find_package(Boost REQUIRED COMPONENTS filesystem)
message("Boost STATUS:
Includes ${Boost_INCLUDE_DIRS}
Libs ${Boost_LIBRARIES}
")
Libs ${Boost_LIBRARIES}")
include_directories(${Boost_INCLUDE_DIRS})
set(BOOST_LIST_OF_LIBRARIES_TO_LINK_WITH "${Boost_LIBRARIES}")

set(LLVM_EXPORT_SYMBOLS_FOR_PLUGINS "yes")
find_package(LLVM REQUIRED CONFIG)
@@ -92,8 +93,7 @@ message("LLVM STATUS:
Definitions ${LLVM_DEFINITIONS}
Includes ${LLVM_INCLUDE_DIRS}
Libraries ${LLVM_LIBRARY_DIRS}
Targets ${LLVM_TARGETS_TO_BUILD}"
)
Targets ${LLVM_TARGETS_TO_BUILD}")


if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
@@ -113,7 +113,9 @@ set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/src")
# Add project specific sources
add_subdirectory(./src)

install(DIRECTORY ./benchmarks DESTINATION .)
install(FILES ./LICENSE.txt DESTINATION .)
if(FIZZ_BUILD_LIBS_32_BIT STREQUAL "No")
install(DIRECTORY ./benchmarks DESTINATION .)
install(FILES ./LICENSE.txt DESTINATION .)
endif()

message("Generating build files ...")
5 changes: 5 additions & 0 deletions README.md
@@ -77,6 +77,11 @@ start with the **age** project:
tasks, e.g., building benchmarks and killing non-terminating clients.
You only need to copy the file from the `setup` folder to the folder
`.vscode` folder.
- (Optional) If you also want to analyze 32-bit programs, then you must also
build the 32-bit version of Fizzer's libraries. That is done automatically via
Fizzer's `build.sh` script. However, the 32-bit version of the C++ standard
library must be available to the C++ compiler. On Linux (Ubuntu) you can
install it with: `sudo apt install g++-multilib`
- (optional) **SmartGit** Git GUI client: https://www.syntevo.com/smartgit/

## Downloading **SBT-Fizzer**
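The `g++-multilib` requirement added to the README can be sanity-checked before running `build.sh`. The helper below is a hypothetical sketch (not part of Fizzer): it tries to compile a trivial program with `g++ -m32` and reports whether a 32-bit standard library is usable.

```python
# Hypothetical probe for 32-bit compile support; not part of Fizzer.
# Returns True if `g++ -m32` works, False if the 32-bit libraries are
# missing, and None when g++ itself is not on PATH.
import os
import shutil
import subprocess
import tempfile

def can_compile_32bit():
    if shutil.which("g++") is None:
        return None  # no compiler installed at all
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "probe.cpp")
        with open(src, "w") as f:
            f.write("#include <iostream>\nint main() { std::cout << sizeof(void*); }\n")
        # Fails with a linker error if g++-multilib (32-bit libstdc++) is absent.
        result = subprocess.run(
            ["g++", "-m32", src, "-o", os.path.join(tmp, "probe")],
            capture_output=True,
        )
        return result.returncode == 0

if __name__ == "__main__":
    print(can_compile_32bit())
```

If this returns `False`, installing `g++-multilib` (as the README suggests) should fix the 32-bit build.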
167 changes: 69 additions & 98 deletions benchmarks/benman.py
@@ -106,99 +106,69 @@ def _execute_and_check_output(self, cmdline : str, desired_output : str, work_di
self._execute(cmdline, os.path.dirname(desired_output) if work_dir is None else work_dir)
ASSUMPTION(os.path.isfile(desired_output), "_execute_and_check_output(): the output is missing: " + desired_output)

def _check_outcomes(self, config : dict, outcomes : dict):
checked_properties_and_comparators = {
"termination_type": "EQ",
"termination_reason": "EQ",
"num_executions": "LE",
"num_covered_branchings": "GE",
"covered_branchings": None,
"num_generated_tests": "GE",
"num_crashes": "GE",
"num_boundary_violations": "LE"
}

def is_valid(obtained, expected, op : str) -> bool:
if op == "EQ": return obtained == expected
if op == "NE": return obtained != expected
if op == "LT": return obtained < expected
if op == "LE": return obtained <= expected
if op == "GT": return obtained > expected
if op == "GE": return obtained >= expected
raise Exception("Invalid comparison operator '" + op + "'.")

for property, expected_value in config["results"].items():
ASSUMPTION(
property in checked_properties_and_comparators,
"Unsupported key '" + property + "' in the 'results' section of benchmark's config JSON file."
)
ASSUMPTION(
property in outcomes,
"The valid key '" + property + "' was not found in the 'outcomes' JSON file."
)
if type(expected_value) in [int, float, str]:
if not is_valid(outcomes[property], expected_value, checked_properties_and_comparators[property]):
@staticmethod
def _add_error_message(text: str, errors: list, properties: list):
errors.append(("In " + "/".join(properties) + ": " if len(properties) > 0 else "") + text)

@staticmethod
def _epsilon_for_property(properties):
if len(properties) == 0: return None
if properties[-1] == "num_executions": return 5.0
return None

@staticmethod
def _check_outcomes(obtained, expected, errors: list, properties = []) -> bool:
if type(expected) is dict:
if type(obtained) is not dict:
Benchmark._add_error_message("Mismatch in JSON structure. Expected dictionary.", errors, properties)
return False
result = True
for key in expected:
if key not in obtained:
Benchmark._add_error_message("Missing property: " + key, errors, properties)
return False
r = Benchmark._check_outcomes(obtained[key], expected[key], errors, properties + [key])
result = result and r
return result
elif type(expected) is list:
if type(obtained) is not list:
Benchmark._add_error_message("Mismatch in JSON structure. Expected list.", errors, properties)
return False
if len(obtained) != len(expected):
Benchmark._add_error_message("Different list size.", errors, properties)
return False
result = True
for i in range(min(len(obtained), len(expected))):
r = Benchmark._check_outcomes(obtained[i], expected[i], errors, properties)
result = result and r
return result
elif type(expected) in [int, float]:
if type(obtained) not in [int, float]:
Benchmark._add_error_message("Mismatch in JSON structure. Expected int or float.", errors, properties)
return False
epsilon = Benchmark._epsilon_for_property(properties)
if epsilon is None:
if obtained != expected:
Benchmark._add_error_message("Expected " + str(expected) + ", obtained " + str(obtained), errors, properties)
return False
else:
ASSUMPTION(property == "covered_branchings", "Only 'covered_branchings' can be a 'list' property to check.")
ASSUMPTION(len(expected_value) % 2 == 0, "Expected covered branchings list must have even number of elements.")
ASSUMPTION(len(outcomes[property]) % 2 == 0, "Obtained covered branchings list must have even number of elements.")
def get_branchings(seq : list) -> set:
result = set()
if len(seq) > 0:
for i in range(0, len(seq)-1, 2):
result.add((seq[i], seq[i+1]))
return result
expected_branchings = get_branchings(expected_value)
obtained_branchings = get_branchings(outcomes[property])
for x in expected_branchings:
if x not in obtained_branchings:
return False
return True

def _ok_stats_message(self, config : dict, outcomes : dict) -> str:
max_num_execution = config["results"]["num_executions"]
try:
num_execution = outcomes["num_executions"]
except Exception as e:
return "Unknown executions count"
percentage = 100.0 * num_execution / max_num_execution
return "#" + ("%.2f" % (percentage - 100)) + "%" if max_num_execution >= 100 and percentage < 90 else ""

def _fail_stats_message(self, config : dict, outcomes : dict) -> str:
expected_termination_type = config["results"]["termination_type"]
try:
termination_type = outcomes["termination_type"]
except Exception as e:
return "Unknown termination type"
if termination_type != expected_termination_type:
return termination_type

result = ""

expected_termination_reason = config["results"]["termination_reason"]
try:
termination_reason = outcomes["termination_reason"]
except Exception as e:
return "Unknown termination reason"
if termination_reason != expected_termination_reason:
result += termination_reason

max_num_execution = config["results"]["num_executions"]
try:
num_execution = outcomes["num_executions"]
except Exception as e:
return result + ("" if len(result) == 0 else ", ") + "unknown executions count"
if len(result) == 0 and num_execution > max_num_execution:
percentage = 100.0 * num_execution / max_num_execution
result += ("" if len(result) == 0 else ", ") + "#" + ("+%.2f" % (percentage - 100)) + "%"

return result

def _embrace_stats_message(self, msg : str) -> str:
if len(msg) == 0:
return msg
return "[" + msg + "]"
percentage = (100.0 * obtained) / expected if expected > 0 else 100.0 * obtained + 100.0
error = percentage - 100.0
if abs(error) > epsilon:
Benchmark._add_error_message("Expected " + str(expected) + ", obtained " + str(obtained) + " [error: " + ("%.2f" % error) + "%]", errors, properties)
return False
return True
elif type(expected) is str:
if type(obtained) is not str:
Benchmark._add_error_message("Mismatch in JSON structure. Expected string.", errors, properties)
return False
if obtained != expected:
Benchmark._add_error_message("Expected " + expected + ", obtained " + obtained, errors, properties)
return False
return True
else:
Benchmark._add_error_message("Unexpected JSON content [type: " + str(type(expected)) + "].", errors, properties)
return False

def build(self, benchmarks_root_dir : str, output_root_dir : str) -> None:
self.log("===")
@@ -216,7 +186,7 @@ def build(self, benchmarks_root_dir : str, output_root_dir : str) -> None:
"--skip_fuzzing",
"--input_file", self.src_file,
"--output_dir", self.work_dir,
"--silent_build",
"--silent_mode",
"--save_mapping"
] + (["--m32"] if "m32" in self.config["args"] and self.config["args"]["m32"] is True else []),
self.fuzz_target_file,
@@ -263,19 +233,20 @@ def fuzz(self, benchmarks_root_dir : str, output_root_dir : str) -> bool:
output_dir
)

errors = []
try:
outcomes_pathname = os.path.join(output_dir, self.name + "_outcomes.json")
with open(outcomes_pathname, "rb") as fp:
outcomes = json.load(fp)
if self._check_outcomes(self.config, outcomes) is True:
stats_msg = self._embrace_stats_message(self._ok_stats_message(self.config, outcomes))
self.log("The outcomes are as expected => the test has PASSED. [Details: " + stats_msg + "]", "ok " + stats_msg + "\n")
if self._check_outcomes(outcomes, self.config["results"], errors) is True:
ASSUMPTION(len(errors) == 0)
self.log("The outcomes are as expected => the test has PASSED.", "ok\n")
return True
except Exception as e:
self.log("FAILURE due to an EXCEPTION: " + str(e), "EXCEPTION[" + str(e) + "]\n")
return False
stats_msg = self._embrace_stats_message(self._fail_stats_message(self.config, outcomes))
self.log("The outcomes are NOT as expected => the test has FAILED. Details: " + stats_msg, "FAILED " + stats_msg + "\n")
error_messages = "\n " + "\n ".join(errors)
self.log("The outcomes are NOT as expected => the test has FAILED. Details:" + error_messages, "FAILED " + error_messages + "\n")
return False

def clear(self, benchmarks_root_dir : str, output_root_dir : str) -> None:
@@ -328,7 +299,7 @@ def search_for_benchmarks(folder : str) -> list:
pass
return benchmarks

kinds = ["fast", "medium", "slow", "pending"]
kinds = ["fast", "iid_testing", "testcomp-selection-selection", "medium", "slow", "pending"]
benchmarks = []
if name == "all":
for kind in kinds:
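The rewritten `_check_outcomes` in benman.py replaces the old per-property comparator table with a recursive walk over the expected JSON, where selected numeric leaves (currently `num_executions`) are compared with a relative tolerance. A minimal standalone sketch of that idea follows; the names (`check`, `epsilon_for`) are hypothetical, not the exact Fizzer code.

```python
# Standalone sketch of recursive outcome checking with a per-property
# relative tolerance; names are hypothetical, not the exact Fizzer code.

def check(obtained, expected, errors, path=(), epsilon_for=lambda path: None):
    if isinstance(expected, dict):
        if not isinstance(obtained, dict):
            errors.append("/".join(path) + ": expected a dictionary")
            return False
        ok = True
        for key, sub in expected.items():
            if key not in obtained:
                errors.append("/".join(path) + ": missing property " + key)
                ok = False
                continue
            ok = check(obtained[key], sub, errors, path + (key,), epsilon_for) and ok
        return ok
    if isinstance(expected, list):
        if not isinstance(obtained, list) or len(obtained) != len(expected):
            errors.append("/".join(path) + ": list mismatch")
            return False
        ok = True
        for o, e in zip(obtained, expected):
            ok = check(o, e, errors, path, epsilon_for) and ok
        return ok
    if isinstance(expected, (int, float)) and not isinstance(expected, bool):
        eps = epsilon_for(path)
        if eps is None:
            if obtained != expected:
                errors.append("/".join(path) + ": expected %s, obtained %s" % (expected, obtained))
                return False
            return True
        # Relative error in percent; only fail when it exceeds the tolerance.
        error = (100.0 * obtained) / expected - 100.0 if expected > 0 else 100.0 * obtained
        if abs(error) > eps:
            errors.append("/".join(path) + ": error %.2f%% exceeds %.2f%%" % (error, eps))
            return False
        return True
    if obtained != expected:
        errors.append("/".join(path) + ": expected %r, obtained %r" % (expected, obtained))
        return False
    return True

# Tolerate up to 5% drift on num_executions, exact match elsewhere.
eps = lambda path: 5.0 if path and path[-1] == "num_executions" else None
errors = []
print(check({"num_executions": 103}, {"num_executions": 100}, errors, epsilon_for=eps))
# prints True (3% drift is within the 5% tolerance)
```

Under this scheme small `num_executions` drifts across the updated benchmark JSONs stay within tolerance instead of failing the test, while every error found is collected into `errors` with its JSON path.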
20 changes: 15 additions & 5 deletions benchmarks/fast/array-1.json
@@ -7,11 +7,12 @@
"max_stdin_bytes": 6400,
"max_exec_milliseconds": 250,
"max_exec_megabytes": 1024,
"stdin_model": "stdin_replay_bytes_then_repeat_85",
"stdin_model": "stdin_replay_bytes_then_repeat_zero",
"stdout_model": "stdout_void",
"optimizer_max_seconds": 10,
"optimizer_max_trace_length": 1000000,
"optimizer_max_stdin_bytes": 1000000
"optimizer_max_stdin_bytes": 1000000,
"m32": false
},
"results": {
"termination_type": "NORMAL",
@@ -21,8 +22,17 @@
"covered_branchings": [
2,0, 3,0
],
"num_generated_tests": 2,
"num_crashes": 0,
"num_boundary_violations": 0
"output_statistics": {
"sensitivity_analysis": {
"num_generated_tests": 1,
"num_crashes": 0,
"num_boundary_violations": 0
},
"STARTUP": {
"num_generated_tests": 1,
"num_crashes": 0,
"num_boundary_violations": 0
}
}
}
}
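The updated benchmark JSONs replace the flat `num_generated_tests` / `num_crashes` / `num_boundary_violations` fields with a per-analysis `output_statistics` map. The old aggregate numbers are recoverable by summing over analyses; a small illustrative helper (hypothetical, not part of benman.py):

```python
# Hypothetical helper: sum per-analysis counters from "output_statistics"
# back into the aggregate numbers the old JSON format used.
def total_counts(output_statistics):
    totals = {}
    for per_analysis in output_statistics.values():
        for key, value in per_analysis.items():
            totals[key] = totals.get(key, 0) + value
    return totals

# Counters from array-1.json above: the totals match the old aggregate values.
stats = {
    "sensitivity_analysis": {"num_generated_tests": 1, "num_crashes": 0, "num_boundary_violations": 0},
    "STARTUP": {"num_generated_tests": 1, "num_crashes": 0, "num_boundary_violations": 0},
}
print(total_counts(stats))
# prints {'num_generated_tests': 2, 'num_crashes': 0, 'num_boundary_violations': 0}
```

The finer granularity lets a failing benchmark point at the specific analysis phase (e.g. `sensitivity_analysis` vs. `STARTUP`) whose counters drifted.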
20 changes: 15 additions & 5 deletions benchmarks/fast/array1_pattern.json
@@ -7,11 +7,12 @@
"max_stdin_bytes": 6400,
"max_exec_milliseconds": 250,
"max_exec_megabytes": 1024,
"stdin_model": "stdin_replay_bytes_then_repeat_85",
"stdin_model": "stdin_replay_bytes_then_repeat_zero",
"stdout_model": "stdout_void",
"optimizer_max_seconds": 10,
"optimizer_max_trace_length": 1000000,
"optimizer_max_stdin_bytes": 1000000
"optimizer_max_stdin_bytes": 1000000,
"m32": false
},
"results": {
"termination_type": "NORMAL",
@@ -22,8 +23,17 @@
1,2654435899, 1,2654435962, 3,0, 4,0,
5,0, 6,0, 7,0, 8,0
],
"num_generated_tests": 7,
"num_crashes": 0,
"num_boundary_violations": 2
"output_statistics": {
"sensitivity_analysis": {
"num_generated_tests": 4,
"num_crashes": 1,
"num_boundary_violations": 2
},
"STARTUP": {
"num_generated_tests": 1,
"num_crashes": 1,
"num_boundary_violations": 0
}
}
}
}
20 changes: 15 additions & 5 deletions benchmarks/fast/big_issue.json
@@ -11,18 +11,28 @@
"stdout_model": "stdout_void",
"optimizer_max_seconds": 10,
"optimizer_max_trace_length": 1000000,
"optimizer_max_stdin_bytes": 1000000
"optimizer_max_stdin_bytes": 1000000,
"m32": false
},
"results": {
"termination_type": "NORMAL",
"termination_reason": "FUZZING_STRATEGY_DEPLETED",
"num_executions": 105,
"num_executions": 103,
"num_covered_branchings": 2,
"covered_branchings": [
2,0, 4,0
],
"num_generated_tests": 3,
"num_crashes": 0,
"num_boundary_violations": 0
"output_statistics": {
"sensitivity_analysis": {
"num_generated_tests": 2,
"num_crashes": 0,
"num_boundary_violations": 0
},
"STARTUP": {
"num_generated_tests": 1,
"num_crashes": 0,
"num_boundary_violations": 0
}
}
}
}