path: root/manual/CHAPTER_Basics.tex
author    Anthony J. Bentley <anthony@cathet.us>    2014-04-11 02:42:59 -0600
committer Anthony J. Bentley <anthony@cathet.us>    2014-04-11 02:42:59 -0600
commit    9c1e578afe6af40be4600c20c883fa016fc7fa26 (patch)
tree      133f4fd1b7f4956c53f79fd731c5ebf07c598262 /manual/CHAPTER_Basics.tex
parent    6ef2224331e7246d1e107c9e533a7cadce786107 (diff)
Typos and grammar fixes through chapter 2.
Diffstat (limited to 'manual/CHAPTER_Basics.tex')
-rw-r--r--    manual/CHAPTER_Basics.tex    30
1 file changed, 15 insertions, 15 deletions
diff --git a/manual/CHAPTER_Basics.tex b/manual/CHAPTER_Basics.tex
index 9cc4720e..c0eda0e8 100644
--- a/manual/CHAPTER_Basics.tex
+++ b/manual/CHAPTER_Basics.tex
@@ -56,7 +56,7 @@ and how they relate to different kinds of synthesis.
Regardless of the way a lower level representation of a circuit is
obtained (synthesis or manual design), the lower level representation is usually
verified by comparing simulation results of the lower level and the higher level
-representation \footnote{In the last years formal equivalence
+representation \footnote{In recent years formal equivalence
checking also became an important verification method for validating RTL and
lower abstraction representation of the design.}.
Therefore even if no synthesis is used, there must still be a simulatable
@@ -71,7 +71,7 @@ be considered a ``High-Level Language'' today.
\subsection{System Level}
The System Level abstraction of a system only looks at its biggest building
-blocks like CPUs and computing cores. On this level the circuit is usually described
+blocks like CPUs and computing cores. At this level the circuit is usually described
using traditional programming languages like C/C++ or Matlab. Sometimes special
software libraries are used that are aimed at simulation circuits on the system
level, such as SystemC.
@@ -177,9 +177,9 @@ synthesis operations.
\subsection{Logical Gate Level}
-On the logical gate level the design is represented by a netlist that uses only
+At the logical gate level the design is represented by a netlist that uses only
cells from a small number of single-bit cells, such as basic logic gates (AND,
-OR, NOT, XOR, etc.) and Registers (usually D-Type Flip-flops).
+OR, NOT, XOR, etc.) and registers (usually D-Type Flip-flops).
A number of netlist formats exists that can be used on this level, e.g.~the Electronic Design
Interchange Format (EDIF), but for ease of simulation often a HDL netlist is used. The latter
@@ -191,8 +191,8 @@ within the gate level netlist and second the optimal (or at least good) mapping
gate netlist to an equivalent netlist of physically available gate types.
The simplest approach to logic synthesis is {\it two-level logic synthesis}, where a logic function
-is converted into a sum-of-products representation, e.g.~using a karnaugh map.
-This is a simple approach, but has exponential worst-case effort and can not make efficient use of
+is converted into a sum-of-products representation, e.g.~using a Karnaugh map.
+This is a simple approach, but has exponential worst-case effort and cannot make efficient use of
physical gates other than AND/NAND-, OR/NOR- and NOT-Gates.
Therefore modern logic synthesis tools utilize much more complicated {\it multi-level logic
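The hunk above touches the manual's description of two-level logic synthesis: enumerate the minterms of a Boolean function and OR them together into a sum-of-products form, the same result a Karnaugh map gives by hand. A minimal brute-force sketch of that idea (in Python, not Yosys code; the function and variable names are illustrative, not from the manual):

```python
from itertools import product

def sum_of_products(n_inputs, truth_fn):
    """Build a sum-of-products (AND-OR) expression for a Boolean
    function by enumerating its minterms -- the brute-force form of
    two-level logic synthesis. Note the exponential worst case the
    text mentions: 2**n_inputs rows are visited."""
    terms = []
    for bits in product([0, 1], repeat=n_inputs):
        if truth_fn(*bits):
            lits = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
            terms.append(" & ".join(lits))
    return " | ".join(f"({t})" for t in terms) if terms else "0"

# XOR has minterms 01 and 10:
expr = sum_of_products(2, lambda a, b: a ^ b)
# expr == "(~x0 & x1) | (x0 & ~x1)"
```

This also illustrates the limitation the text notes: the result uses only AND, OR, and NOT literals, so gates like XOR or complex cells cannot be exploited directly.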
@@ -287,7 +287,7 @@ applications to be used with a richer set of Verilog features.
\subsection{Behavioural Modelling}
Code that utilizes the Verilog {\tt always} statement is using {\it Behavioural
-Modelling}. In behavioural, modelling a circuit is described by means of imperative
+Modelling}. In behavioural modelling, a circuit is described by means of imperative
program code that is executed on certain events, namely any change, a rising
edge, or a falling edge of a signal. This is a very flexible construct during
simulation but is only synthesizable when one of the following is modelled:
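The behavioural-modelling paragraph in this hunk says an `always` block is synthesizable when it models, among other things, an update on a rising clock edge. A toy event-driven model of that pattern (a Python sketch of `always @(posedge clk) q <= d`, not Yosys or manual code):

```python
class DFlipFlop:
    """Toy model of a D-type flip-flop: the register output q takes
    the value of d only when a rising edge of clk is observed,
    mirroring the synthesizable 'rising edge' event in the text."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, clk, d):
        if self._prev_clk == 0 and clk == 1:  # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
# (clk, d) samples; q only updates where clk goes 0 -> 1:
trace = [ff.tick(clk, d) for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0), (1, 1)]]
# trace == [0, 1, 1, 0, 0]
```

Changes of `d` while the clock is stable (the last sample) are ignored, which is exactly what makes the construct map onto a physical register.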
@@ -457,7 +457,7 @@ Correctness is crucial. In some areas this is obvious (such as
correct synthesis of basic behavioural models). But it is also crucial for the
areas that concern minor details of the standard, such as the exact rules
for handling signed expressions, even when the HDL code does not target
-different synthesis tools. This is because (different to software source code that
+different synthesis tools. This is because (unlike software source code that
is only processed by compilers), in most design flows HDL code is not only
processed by the synthesis tool but also by one or more simulators and sometimes
even a formal verification tool. It is key for this verification process
@@ -467,9 +467,9 @@ that all these tools use the same interpretation for the HDL code.
Generally it is hard to give a one-dimensional description of how well a synthesis tool
optimizes the design. First of all because not all optimizations are applicable to all
-designs and all synthesis tasks. Some optimizations work (best) on a coarse grain level
-(with complex cells such as adders or multipliers) and others work (best) on a fine
-grain level (single bit gates). Some optimizations target area and others target speed.
+designs and all synthesis tasks. Some optimizations work (best) on a coarse-grained level
+(with complex cells such as adders or multipliers) and others work (best) on a fine-grained
+level (single bit gates). Some optimizations target area and others target speed.
Some work well on large designs while others don't scale well and can only be applied
to small designs.
@@ -610,7 +610,7 @@ The lexer is usually generated by a lexer generator (e.g.~{\tt flex} \citeweblin
description file that is using regular expressions to specify the text pattern that should match
the individual tokens.
-The lexer is also responsible for skipping ignored characters (such as white spaces outside string
+The lexer is also responsible for skipping ignored characters (such as whitespace outside string
constants and comments in the case of Verilog) and converting the original text snippet to a token
value.
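The hunk above concerns the lexer's job of matching token patterns via regular expressions while skipping ignored characters. A compact sketch of that mechanism in Python's `re` module (a hypothetical mini-lexer in the spirit of a flex specification, not the manual's actual grammar):

```python
import re

# Each (name, regex) pair describes one token class, as in a flex
# description file. SKIP covers whitespace and //-comments, which
# are matched but never emitted.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[=+;]"),
    ("SKIP",   r"\s+|//[^\n]*"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    """Scan text left to right, converting each matched snippet to a
    (token-class, value) pair and dropping ignored characters."""
    tokens = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

# tokenize("a = 42; // set a")
#   -> [("IDENT", "a"), ("OP", "="), ("NUMBER", "42"), ("OP", ";")]
```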
@@ -714,11 +714,11 @@ be connected in two different ways: through {\it Single-Pass Pipelining} and by
Traditionally a parser and lexer are connected using the pipelined approach: The lexer provides a function that
is called by the parser. This function reads data from the input until a complete lexical token has been read. Then
this token is returned to the parser. So the lexer does not first generate a complete list of lexical tokens
-and then passes it to the parser. Instead they are running concurrently and the parser can consume tokens as
+and then pass it to the parser. Instead they run concurrently and the parser can consume tokens as
the lexer produces them.
-The single-pass pipelining approach has the advantage of lower memory footprint (at no time the complete design
-must be kept in memory) but has the disadvantage of tighter coupling between the interacting components.
+The single-pass pipelining approach has the advantage of lower memory footprint (at no time must the complete design
+be kept in memory) but has the disadvantage of tighter coupling between the interacting components.
Therefore single-pass pipelining should only be used when the lower memory footprint is required or the
components are also conceptually tightly coupled. The latter certainly is the case for a parser and its lexer.
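The single-pass pipelining described in this last hunk has a natural expression as a Python generator: the lexer yields one token at a time and the parser pulls tokens on demand, so the two run concurrently and the full token list never exists in memory. A minimal sketch (hypothetical toy grammar, not the manual's Verilog frontend):

```python
def lexer(text):
    """Generator lexer: yields one token at a time rather than
    building a complete token list first (single-pass pipelining)."""
    for word in text.split():
        yield ("NUM", int(word)) if word.isdigit() else ("OP", word)

def parse_sum(tokens):
    """Tiny recursive-descent-style parser for 'NUM (+ NUM)*' that
    consumes tokens from the lexer as they are produced."""
    total = next(tokens)[1]
    for kind, value in tokens:
        if kind == "OP" and value == "+":
            total += next(tokens)[1]
    return total

result = parse_sum(lexer("1 + 2 + 39"))
# result == 42
```

The tight coupling the text warns about is visible here: the parser only works when handed a live token stream with exactly this protocol, whereas a materialized token list could be inspected, cached, or reused independently.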