diff --git a/report/report.aux b/report/report.aux
index 8811a72..20725e7 100644
--- a/report/report.aux
+++ b/report/report.aux
@@ -28,9 +28,9 @@
 \@writefile{toc}{\contentsline {subsection}{\numberline {\mbox {II-C}}Boundary Value Testing}{1}{subsection.2.3}\protected@file@percent }
 \@writefile{lot}{\contentsline {table}{\numberline {IV}{\ignorespaces Boundary values for input parameters.\relax }}{1}{table.caption.4}\protected@file@percent }
 \newlabel{boundaries}{{IV}{1}{Boundary values for input parameters.\relax }{table.caption.4}{}}
+\@writefile{toc}{\contentsline {subsubsection}{\numberline {\mbox {II-C}1}Test Cases}{1}{subsubsection.2.3.1}\protected@file@percent }
 \@writefile{lot}{\contentsline {table}{\numberline {V}{\ignorespaces Test cases generated for boundary value testing\relax }}{2}{table.caption.5}\protected@file@percent }
 \newlabel{tab:test_cases}{{V}{2}{Test cases generated for boundary value testing\relax }{table.caption.5}{}}
-\@writefile{toc}{\contentsline {subsubsection}{\numberline {\mbox {II-C}1}Test Cases}{2}{subsubsection.2.3.1}\protected@file@percent }
 \@writefile{toc}{\contentsline {section}{\numberline {III}Experiment 2}{2}{section.3}\protected@file@percent }
 \@writefile{toc}{\contentsline {subsection}{\numberline {\mbox {III-A}}Decision Table Testing}{2}{subsection.3.1}\protected@file@percent }
 \@writefile{lot}{\contentsline {table}{\numberline {VI}{\ignorespaces Test cases generated for for decision table testing\relax }}{2}{table.caption.6}\protected@file@percent }
diff --git a/report/report.pdf b/report/report.pdf
index a12ecde..1501292 100644
Binary files a/report/report.pdf and b/report/report.pdf differ
diff --git a/report/report.tex b/report/report.tex
index adbbc48..36a2272 100644
--- a/report/report.tex
+++ b/report/report.tex
@@ -25,7 +25,7 @@
 % Introduction Section
 \section{Introduction}
-In this report we implemented and tested two given programs. Decision table testing, equivalence testing and boundary testing done for each and compared a) different testing techniques for same application b) same testing technique for different programs.
+In this report we implemented and tested two given programs. Decision table testing, equivalence class testing, and boundary value testing were applied to each program, and the two programs were compared with each other with respect to the three testing techniques.
 
 
 % Experiment 1 Section
 \section{Experiment 1}
@@ -267,6 +267,31 @@ Boundary Values Considered:
 
 \section{Conclusion}
 
+In Project 2 (the Access Control scenario), the decision conditions concerned employee roles, special authorization, and time constraints. In contrast, Project 1 focused on numerical thresholds (Purchase Amount, Number of Items) and cost calculations. While Project 2 tested boolean logic and access permissions, Project 1 tested arithmetic computations and fee structures.
+
+
+\textbf{Test Complexity:}
+\begin{itemize}
+    \item Project 2: more logical/boolean conditions.
+    \item Project 1: more arithmetic and boundary-value checks.
+\end{itemize}
+
+
+\textbf{Number of Test Cases:}
+\begin{itemize}
+    \item Project 2: focused on covering all logic paths (e.g., 8 decision table tests).
+    \item Project 1: has distinct numeric thresholds, leading to more test cases because of the multiple numeric partitions and delivery speed options (12 scenarios, as outlined above).
+\end{itemize}
+
+
+\textbf{Coverage Strategy:}
+\begin{itemize}
+    \item Project 2: ensured every logical combination was tested.
+    \item Project 1: ensured each numerical range and delivery option was covered, exercising every formula variation.
+\end{itemize}
+
+
+
 \onecolumn
 
 \section*{Additional Resources}
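As a note for reviewers of this change: the boundary value strategy the conclusion describes for Project 1 (probing each numeric threshold just below, at, and just above its value) can be sketched as below. Everything in the sketch is an assumption for illustration only: the `shipping_fee` function, its thresholds, and its fee amounts are invented placeholders, since the actual formulas are not included in this diff.

```python
# Hypothetical fee function for a Project-1-style scenario.
# The thresholds and fees here are placeholders, NOT the real ones.
def shipping_fee(purchase_amount: float, num_items: int, express: bool) -> float:
    if purchase_amount < 0 or num_items < 1:
        raise ValueError("invalid input")
    base = 0.0 if purchase_amount >= 100.0 else 10.0       # assumed free-shipping threshold
    surcharge = 0.5 * num_items if num_items > 10 else 0.0  # assumed bulk-order surcharge
    speed = 15.0 if express else 0.0                        # assumed express delivery fee
    return base + surcharge + speed

# Boundary value testing: for each numeric threshold, test min-, min, min+.
def boundary_points(threshold: float, step: float = 1.0) -> list[float]:
    return [threshold - step, threshold, threshold + step]

# Exercise the assumed free-shipping boundary at 99, 100, and 101.
for amount in boundary_points(100.0):
    print(amount, shipping_fee(amount, num_items=1, express=False))
```

The same three-point pattern would be repeated for the Number of Items threshold and combined with each delivery speed option, which is how a small set of thresholds fans out into the dozen scenarios the report tabulates.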