An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.
A wrapper for Floats that allows commutative operations between fields and floats.
Cog symbolic operators implemented using GPUOperators.
An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.
An output color field value is generated on every cycle of the simulation. The user function update is called when the output is ready, so that the user may use that information elsewhere.
A color image.
Inputs to a Cog computation are called sensors. This implements the pipelined version of ColorSensors.
Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.
When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.
Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.
Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.
Both types of sensors supply their next input via a nextValue function which (optionally) returns an iterator over the values of the new input in row-major order.
NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the nextValue function.
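The hold-when-None behavior described above can be sketched in plain Scala. This is an illustration, not the Cog implementation: PipelineRegister and slowSensor are hypothetical stand-ins modeling a pipelined sensor's nextValue contract.

```scala
// Sketch of a pipelined sensor feeding a pipeline register.
// When nextValue returns None, the register is not clocked and
// the downstream (GPU) side sees the same data as last cycle.
class PipelineRegister(initial: Array[Float]) {
  private var data: Array[Float] = initial
  def clock(nextValue: () => Option[Iterator[Float]]): Array[Float] = {
    nextValue() match {
      case Some(it) => data = it.toArray // new input: clock the register
      case None     =>                   // no input: hold previous data
    }
    data
  }
}

// A slow sensor that only produces data on every other call,
// decoupled from the faster simulation clock.
var tick = 0
val slowSensor: () => Option[Iterator[Float]] = () => {
  tick += 1
  if (tick % 2 == 1) Some(Iterator(tick.toFloat, tick.toFloat)) else None
}

val reg = new PipelineRegister(Array(0f, 0f))
val cycle1 = reg.clock(slowSensor) // new data arrives
val cycle2 = reg.clock(slowSensor) // sensor returned None: data held
```

Here the simulation runs two cycles while the sensor produces only one input, so both cycles present the same data downstream.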
A multidimensional array of complex numbers.
A multidimensional array of complex vectors.
Base class for all Fields; defines the operators that can be applied to Fields.
Fields
A field is a multidimensional array of tensors, where tensors are defined to be multidimensional arrays of numbers. The dimensionality of the field may be 0, 1, 2 or 3. The actual sizes of the field dimensions are called the "field shape." To make programming easier, the field shape is described using the terms layers, rows and columns. 3D fields use all three values. 2D fields use only rows and columns and have layers set to 1 for convenience. 1D fields use only columns and have layers and rows set to 1 for convenience. 0D fields have only a single tensor and have no need for layers, rows or columns, but for convenience these values are set to 1.
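The padding convention above can be illustrated with a small sketch. FieldShape and fieldShapeOf here are hypothetical helpers for illustration, not the Cog API:

```scala
// Field shape as (layers, rows, columns), with unused dimensions set to 1.
case class FieldShape(layers: Int, rows: Int, columns: Int)

// Map a list of dimension sizes (0 to 3 of them) onto the
// (layers, rows, columns) convention, padding with 1s.
def fieldShapeOf(sizes: Int*): FieldShape = sizes.toList match {
  case Nil                => FieldShape(1, 1, 1) // 0D: single tensor
  case c :: Nil           => FieldShape(1, 1, c) // 1D: columns only
  case r :: c :: Nil      => FieldShape(1, r, c) // 2D: rows and columns
  case l :: r :: c :: Nil => FieldShape(l, r, c) // 3D: all three
  case _ => throw new IllegalArgumentException("fields are at most 3D")
}

val shape0d = fieldShapeOf()     // FieldShape(1, 1, 1)
val shape2d = fieldShapeOf(4, 5) // FieldShape(1, 4, 5)
```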
Tensors
The dimensionality of a tensor is called its "order," which may be 0 (for scalars), 1 (for vectors), or 2 (for matrices). Tensors also have a shape, which uses naming similar to field shapes. For example, a matrix has rows and columns. All tensors within a given field have exactly the same shape.
Operators
Operators take one or more fields (which can be considered immutable objects) and produce a result field. Each operator has a set of rules defining the legal combinations of fields it accepts as inputs, and how those inputs are combined to produce the output. Fortunately, most operators use only one of a small set of rules; the most common rules are described below:
Algebraic binary operator rules
Binary operators take two fields as inputs. Generally if one of them is a complex field, the other will be implicitly converted to a complex form (with zero imaginary components) before proceeding.
The two input fields are algebraically compatible if they satisfy one of the following four conditions (which also define the nature of their result):
1. They have exactly the same field shape and tensor shape. In this case, corresponding elements of the two fields are combined to produce the result: a field with the same field shape and tensor shape as the two input fields.
2. They have exactly the same field shape, but one of them is a scalar field and the other is a (non-scalar) tensor field. In this case the scalar at each location in the scalar field is combined with the tensor in the corresponding location in the tensor field. The result is a tensor field with the same field shape and tensor shape as the input tensor field.
3. One of them is a 0-dimensional scalar field. In this case the single scalar of the 0D scalar field is combined with each element of every tensor in the other field. The result is a tensor field with the same field shape and tensor shape as the input tensor field.
4. One of them is a 0-dimensional tensor field (non-scalar). In this case, the tensor shape of the 0-dimensional field must exactly match the tensor shape of the other field. The tensor from the 0-dimensional field is combined element-wise with each of the tensors in the other field to produce the result, which has the same field shape and tensor shape as the larger input field.
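The four rules can be sketched as a small compatibility checker. This is an illustration only: shapes are modeled as plain Seq[Int] values (an empty field shape means 0-dimensional, an empty tensor shape means scalar), which is not how Cog represents them internally.

```scala
// A shape is a sequence of dimension sizes; Seq() models a
// 0-dimensional field shape or a scalar tensor shape.
type Shape = Seq[Int]

// Returns the (fieldShape, tensorShape) of the result if the two
// inputs are algebraically compatible, or None otherwise.
def resultShape(f1: Shape, t1: Shape, f2: Shape, t2: Shape): Option[(Shape, Shape)] =
  if (f1 == f2 && t1 == t2) Some((f1, t1))          // rule 1: identical shapes
  else if (f1 == f2 && t1.isEmpty) Some((f2, t2))   // rule 2: first is a scalar field
  else if (f1 == f2 && t2.isEmpty) Some((f1, t1))   // rule 2: second is a scalar field
  else if (f1.isEmpty && t1.isEmpty) Some((f2, t2)) // rule 3: first is a 0D scalar field
  else if (f2.isEmpty && t2.isEmpty) Some((f1, t1)) // rule 3: second is a 0D scalar field
  else if (f1.isEmpty && t1 == t2) Some((f2, t2))   // rule 4: first is a 0D tensor field
  else if (f2.isEmpty && t1 == t2) Some((f1, t1))   // rule 4: second is a 0D tensor field
  else None

// Rule 2: a 2x3 vector field combined with a 2x3 scalar field.
val r2 = resultShape(Seq(2, 3), Seq(4), Seq(2, 3), Seq())
// Rule 3: a 0D scalar field combined with a 1D vector field.
val r3 = resultShape(Seq(), Seq(), Seq(5), Seq(3))
```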
Algebraic unary operator rules
Operators which take only one field as input (and an optional numeric constant) produce a result with the same field shape and tensor shape as the input field. If the input field is complex, the optional numeric constant is converted to complex (with zero imaginary part) before proceeding with the operation.
Boolean result rules
Operators which produce boolean results, such as the comparison operators, use 1.0f to represent true and 0.0f to represent false.
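For example, an element-wise greater-than comparison would behave as sketched below (plain Scala illustration, not the Cog operator itself):

```scala
// Comparison operators encode true as 1.0f and false as 0.0f.
def greaterThan(a: Float, b: Float): Float = if (a > b) 1.0f else 0.0f

// Element-wise comparison of two scalar inputs.
val result = Seq(3f, 1f, 2f).zip(Seq(2f, 2f, 2f)).map {
  case (a, b) => greaterThan(a, b)
}
```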
Trait that centralizes the policies for naming fields.
Field names are Scala-like path names with '.' separated components. The last component is called the simple name, while the components leading up to the simple name comprise the path name prefix.
Naming is sticky. Once a path name prefix has been declared, it cannot be changed. Similarly, once a simple name has been declared, it cannot be changed.
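A minimal sketch of the sticky-naming policy (NamedField and its methods are hypothetical, not the Cog trait itself):

```scala
// Names are '.'-separated paths: a path-name prefix plus a simple name.
// Once declared, neither part may be changed ("sticky").
class NamedField {
  private var simpleName: Option[String] = None
  private var prefix: Option[String] = None

  def setSimpleName(name: String): Unit =
    if (simpleName.isEmpty) simpleName = Some(name) // first declaration wins

  def setPathPrefix(p: String): Unit =
    if (prefix.isEmpty) prefix = Some(p)            // first declaration wins

  def name: String = (prefix.toList ++ simpleName.toList).mkString(".")
}

val f = new NamedField
f.setSimpleName("weights")
f.setPathPrefix("model.layer1")
f.setSimpleName("bias") // ignored: the simple name is sticky
```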
A multidimensional array of matrices.
Augmenting methods for those sensor and actuator classes wishing to be saved to a file and restored.
A multidimensional array of scalars.
Inputs to a Cog computation are called sensors. This implements the pipelined version.
Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.
When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.
Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.
Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.
Both types of sensors supply their next input via a nextValue function which (optionally) returns an iterator over the values of the new input in row-major order. Alternatively, the nextValue function can supply the full dataset as an Array[Float] (for 0D or 1D fields), an Array[Array[Float]] (for 2D fields), etc.
The use of implicits here is primarily to avoid duplicate constructor signatures. Further work could make the primary constructor private, with only nextValue functions of certain forms allowed (not the generic () => Option[_]).
NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the nextValue function.
An output from a Cog computation, called an actuator.
An output scalar field value is generated on every cycle of the simulation. The user function newOutput is called when the output is ready, so that the user may use that information elsewhere.
This class needs some clean-up, since Actuators are created through both the 'new' keyword and factory object apply() methods. The apply methods are better on the one hand for isolating user code from changes in the platform implementation. However, the recommended approach for saving/restoring Actuators has the user create a subclass of Actuator with restoreParameters and restoringClass overridden.
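The per-cycle callback behavior can be sketched as follows. SimpleActuator and step are hypothetical stand-ins for the Cog machinery, shown only to illustrate the newOutput contract:

```scala
// Each simulation cycle produces a new output field value and hands
// it to the user-supplied callback once it is ready.
class SimpleActuator(newOutput: Iterator[Float] => Unit) {
  def step(outputData: Array[Float]): Unit =
    newOutput(outputData.iterator) // output is ready: notify the user
}

// The user callback stashes the latest output for use elsewhere.
var latest: Seq[Float] = Seq.empty
val actuator = new SimpleActuator(it => latest = it.toSeq)

actuator.step(Array(0.5f, 1.5f)) // one simulation cycle
```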
An output from a Cog computation, called an actuator.
An output color field value is generated on every cycle of the simulation. The user function newOutput is called when the output is ready, so that the user may use that information elsewhere.
Inputs to a Cog computation are called sensors. This implements the unpipelined version of ColorSensors.
Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.
When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.
Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.
Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.
Both types of sensors supply their next input via a nextValue function which (optionally) returns an iterator over the values of the new input in row-major order.
NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the nextValue function.
Inputs to a Cog computation are called sensors. This implements the unpipelined version.
Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.
When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.
Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.
Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.
Both types of sensors supply their next input via a nextValue function which (optionally) returns an iterator over the values of the new input in row-major order. Alternatively, the nextValue function can supply the full dataset as an Array[Float] (for 0D or 1D fields), an Array[Array[Float]] (for 2D fields), etc.
The use of implicits here is primarily to avoid duplicate constructor signatures. Further work could make the primary constructor private, with only nextValue functions of certain forms allowed (not the generic () => _).
NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the nextValue function.
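Flattening a 2D dataset into the row-major order expected from nextValue can be sketched in plain Scala (rowMajor is an illustrative helper, not part of the Cog API):

```scala
// A 2D field's data as rows of columns; nextValue may supply either
// the nested arrays directly or a row-major iterator like this one.
def rowMajor(data: Array[Array[Float]]): Iterator[Float] =
  data.iterator.flatMap(_.iterator)

val field2d = Array(Array(1f, 2f), Array(3f, 4f))
val values = rowMajor(field2d).toSeq // row 0 first, then row 1
```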
An output from a Cog computation, called an actuator.
An output scalar field value is generated on every cycle of the simulation. The user function newOutput is called when the output is ready, so that the user may use that information elsewhere.
Inputs to a Cog computation are called sensors. This implements the unpipelined vector version.
Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.
When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.
Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.
Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.
Both types of sensors supply their next input via a nextValue function which (optionally) returns an iterator over the values of the new input in row-major order.
NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the nextValue function.
An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.
An output vector field value is generated on every cycle of the simulation. The user function update is called when the output is ready, so that the user may use that information elsewhere.
A multidimensional array of vectors.
Inputs to a Cog computation are called sensors. This implements the pipelined version.
Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.
When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.
Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.
Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.
Both types of sensors supply their next input via a nextValue function which (optionally) returns an iterator over the values of the new input in row-major order.
NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the nextValue function.
Factory for creating actuators that write fields to Scala arrays.
Factory for creating actuators that write fields to Scala arrays of Pixels.
Functions for creating constant/recurrent color fields.
Functions for creating constant/recurrent complex fields.
Functions for creating constant/recurrent complex vector fields.
Factory for creating constant fields.
This object extends Intrinsics to give the ImplicitConversions trait a convenient place to access the methods defined there, as in: Field.vectorField(in1, in2)
Functions for creating constant/recurrent matrix fields.
Functions for creating constant/recurrent scalar fields.
Factory for creating actuators that write fields to Scala arrays.
Factory for creating actuators that write fields to Scala arrays.
Factory for creating actuators that write fields to Scala arrays.
Functions for creating constant/recurrent vector fields.
An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.
An output scalar field value is generated on every cycle of the simulation. The user function update is called when the output is ready, so that the user may use that information elsewhere. An implicit ClassTag is included to disambiguate the signature from another using Function1[Iterator[Float],Unit].
This class needs some clean-up, since Actuators are created through both the 'new' keyword and factory object apply() methods. The apply methods are better on the one hand for isolating user code from changes in the platform implementation. However, the recommended approach for saving/restoring Actuators has the user create a subclass of Actuator with restoreParameters and restoringClass overridden.