Package cogx.compiler.parser.syntaxtree

package syntaxtree

Type Members

  1. class Actuator extends ScalarField with SemanticError with RestoreHooks

    An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.

    An output scalar field value is generated on every cycle of the simulation. The user function update is called when the output is ready, so that the user may use that information elsewhere.

    An implicit ClassTag is included to disambiguate the signature from another that uses Function1[Iterator[Float], Unit].

    This class needs some clean-up, since Actuators are created both through the 'new' keyword and through factory object apply() methods. The apply methods have the advantage of isolating user code from changes in the platform implementation. However, the recommended approach for saving/restoring Actuators has the user create a subclass of Actuator with restoreParameters and restoringClass overridden.
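    The recommended subclassing pattern can be sketched with a small self-contained mock. The trait below merely stands in for the real RestoreHooks; its member signatures, and the FileActuator class, are assumptions for illustration, not the actual cogx API:

    ```scala
    // Illustrative mock of the RestoreHooks pattern; the real cogx trait's
    // signatures are not shown on this page, so these are assumptions.
    trait RestoreHooksLike {
      def restoreParameters: String = ""   // extra state to save with the model
      def restoringClass: AnyRef = this    // object whose class restores this node
    }

    // A user subclass overrides both hooks so the actuator can be rebuilt
    // when a saved computation is reloaded.
    class FileActuator(val path: String) extends RestoreHooksLike {
      override def restoreParameters: String = s"path=$path"
      override def restoringClass: AnyRef = FileActuator
    }

    object FileActuator {
      // Factory the restore machinery could invoke with the saved parameters.
      def restore(params: String): FileActuator =
        new FileActuator(params.stripPrefix("path="))
    }
    ```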

  2. class CogFloat extends AnyRef

    A wrapper for Floats that allows commutative operations between fields and floats.
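    The reason a wrapper is needed can be shown with a toy field type. ToyField and ToyCogFloat are invented names for this sketch; the real CogFloat wraps a Float so that expressions like 2f * field resolve:

    ```scala
    // Toy illustration: Float has no `*` method taking a field, so
    // `2f * field` would not compile without an implicit conversion to a
    // wrapper that delegates to the field's own operator.
    case class ToyField(data: Vector[Float]) {
      def *(s: Float): ToyField = ToyField(data.map(_ * s))
    }

    implicit class ToyCogFloat(val f: Float) {
      // Commutes the operation: float * field == field * float
      def *(field: ToyField): ToyField = field * f
    }
    ```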

  3. trait CogSymbolicOperators extends AnyRef

    Cog symbolic operators implemented using GPUOperators.

  4. class ColorActuator extends ColorField with SemanticError with RestoreHooks

    An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.

    An output color field value is generated on every cycle of the simulation. The user function update is called when the output is ready, so that the user may use that information elsewhere.

  5. class ColorField extends Field with CompilerError with SemanticError

    A color image.

  6. class ColorSensor extends ColorField with SemanticError with RestoreHooks

    Inputs to a Cog computation are called sensors. This implements the pipelined version of ColorSensors.

    Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.

    When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.

    Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.

    Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.

    Both types of sensors supply their next input via a nextValue function, which (optionally) returns an iterator over the values of the new input in row-major order.

    NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the function nextValue.

  7. class ComplexField extends Field with CompilerError with SemanticError

    A multidimensional array of complex numbers.

  8. class ComplexVectorField extends Field with CompilerError with SemanticError

    A multidimensional array of complex vectors.

  9. abstract class Field extends Hyperedge[Operation] with RecurrenceTrait with SemanticError with CompilerError with ImplicitConversions with FieldName with FieldParameters with CogOperatorAPI

    Base class for all Fields; defines the operators that can be applied to Fields.

    Fields

    A field is a multidimensional array of tensors, where tensors are defined to be multidimensional arrays of numbers. The dimensionality of the field may be 0, 1, 2 or 3. The actual sizes of the field dimensions are called the "field shape." To make programming easier, the field shape is described using the terms layers, rows and columns. 3D fields use all three values. 2D fields use only rows and columns and have layers set to 1 for convenience. 1D fields use only columns and have layers and rows set to 1 for convenience. 0D fields have only a single tensor and have no need for layers, rows or columns, but for convenience these values are set to 1.

    Tensors

    The dimensionality of a tensor is called its "order" which may be 0 (for scalars), 1 (vectors), or 2 (matrices). Tensors also have a shape which uses similar naming as for field shapes. For example, a matrix has rows and columns. All tensors within a given field have exactly the same shape.

    Operators

    Operators take one or more fields (which can be considered as immutable objects) and produce a result field. Each operator has a set of rules defining the legal combinations of fields it accepts as inputs, and how those inputs are combined to produce the output. Fortunately most operators use only one of a small set of different rules; the most common rules are now described:

    Algebraic binary operator rules

    Binary operators take two fields as inputs. Generally if one of them is a complex field, the other will be implicitly converted to a complex form (with zero imaginary components) before proceeding.

    The two input Fields are algebraically compatible if they satisfy one of the following four conditions (which also define the nature of their result):

    1. They have exactly the same field shape and tensor shape. In this case, corresponding elements of the two fields are combined to produce the result: a field with the same field shape and tensor shape as the two input fields.

    2. They have exactly the same field shape, but one of them is a scalar field and the other is a (non-scalar) tensor field. In this case the scalar at each location in the scalar field is combined with the tensor in the corresponding location in the tensor field. The result is a tensor field with the same field shape and tensor shape as the input tensor field.

    3. One of them is a 0-dimensional scalar field. In this case the single scalar of the 0D scalar field is combined with each element of every tensor in the tensor field. The result is a tensor field with the same field shape and tensor shape as the input tensor field.

    4. One of them is a 0-dimensional tensor field (non-scalar). In this case, the tensor shape of the 0-dimensional field must exactly match the tensor shape of the other field. The tensor from the 0-dimensional field is combined element-wise with each of the tensors in the other field to produce the result, which has the same field shape and tensor shape as the larger input field.

    Algebraic unary operator rules

    Operators which take only one field as input (and an optional numeric constant) produce a result with the same field shape and tensor shape as the input field. If the input field is complex, the optional numeric constant is converted to complex (with zero imaginary part) before proceeding with the operation.

    Boolean result rules

    Operators which produce boolean results, such as the comparison operators, use 1.0f to represent true and 0.0f to represent false.
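    The four compatibility conditions above can be summarized in a small self-contained sketch that computes the result shapes. This is a toy model, not the compiler's actual type checker; here a field is described by a (fieldShape, tensorShape) pair, with Nil standing for a 0D field shape or a scalar tensor shape:

    ```scala
    // Toy model of the algebraic binary operator rules (illustration only).
    object AlgebraicCompat {
      type Shape = List[Int]
      // Returns the (fieldShape, tensorShape) of the result, or None if
      // the two inputs are not algebraically compatible.
      def result(f1: Shape, t1: Shape, f2: Shape, t2: Shape): Option[(Shape, Shape)] =
        if (f1 == f2 && t1 == t2)          Some((f1, t1)) // rule 1: identical shapes
        else if (f1 == f2 && t1.isEmpty)   Some((f2, t2)) // rule 2: scalar field broadcast
        else if (f1 == f2 && t2.isEmpty)   Some((f1, t1))
        else if (f1.isEmpty && t1.isEmpty) Some((f2, t2)) // rule 3: 0D scalar field
        else if (f2.isEmpty && t2.isEmpty) Some((f1, t1))
        else if (f1.isEmpty && t1 == t2)   Some((f2, t2)) // rule 4: 0D tensor field
        else if (f2.isEmpty && t1 == t2)   Some((f1, t1))
        else None                                         // incompatible
    }
    ```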

  10. trait FieldName extends AnyRef

    Trait that centralizes the policies for naming fields.

    Field names are Scala-like path names with '.' separated components. The last component is called the simple name, while the components leading up to the simple name comprise the path name prefix.

    Naming is sticky. Once a path name prefix has been declared, it cannot be changed. Similarly, once a simple name has been declared, it cannot be changed.
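    The sticky-naming policy can be sketched as follows. This is a self-contained illustration with invented names, not the actual trait's implementation:

    ```scala
    // Sketch of the sticky-naming policy: the prefix and the simple name may
    // each be declared once and never changed afterwards.
    class StickyNameHolder {
      private var prefix: Option[String] = None
      private var simple: Option[String] = None

      def declarePrefix(p: String): Unit = {
        require(prefix.isEmpty, s"path name prefix already declared: ${prefix.get}")
        prefix = Some(p)
      }
      def declareSimpleName(s: String): Unit = {
        require(simple.isEmpty, s"simple name already declared: ${simple.get}")
        simple = Some(s)
      }
      // Full path name: '.'-separated prefix components, then the simple name.
      def name: String = (prefix.toList ++ simple.toList).mkString(".")
    }
    ```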

  11. class MatrixField extends Field with CompilerError with SemanticError

    A multidimensional array of matrices.

  12. trait RestoreHooks extends AnyRef

    Augmenting methods for those sensor and actuator classes wishing to be saved to a file and restored.

  13. class ScalarField extends Field with CompilerError with SemanticError

    A multidimensional array of scalars.

  14. class Sensor extends ScalarField with SemanticError with RestoreHooks

    Inputs to a Cog computation are called sensors. This implements the pipelined version.

    Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.

    When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.

    Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.

    Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.

    Both types of sensors supply their next input via a nextValue function, which (optionally) returns an iterator over the values of the new input in row-major order. Alternatively, the nextValue function can supply the full dataset as an Array[Float] (for 0D or 1D fields), Array[Array[Float]] (for 2D fields), etc.

    The use of implicits here is primarily to avoid duplicate constructor signatures. Further work could make the primary constructor private, with only nextValue functions of certain forms allowed (not the generic () => Option[_]).

    NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the function nextValue.
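    The pipeline-register behavior described above (None means "don't clock the register") can be modeled in a few lines. This is a toy sketch with invented names, not the actual runtime code:

    ```scala
    // Toy model of a pipelined sensor's register: when nextValue returns None,
    // the register is not clocked and the previous data is presented again.
    class ToyPipelineRegister(init: Array[Float]) {
      private var held: Array[Float] = init
      def clock(nextValue: () => Option[Iterator[Float]]): Array[Float] = {
        nextValue().foreach(it => held = it.toArray) // None => keep old data
        held
      }
    }
    ```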

  15. class UnpipelinedActuator extends Operation with SemanticError with RestoreHooks

    An output from a Cog computation, called an actuator.

    An output scalar field value is generated on every cycle of the simulation. The user function newOutput is called when the output is ready, so that the user may use that information elsewhere.

    This class needs some clean-up, since Actuators are created both through the 'new' keyword and through factory object apply() methods. The apply methods have the advantage of isolating user code from changes in the platform implementation. However, the recommended approach for saving/restoring Actuators has the user create a subclass of Actuator with restoreParameters and restoringClass overridden.

  16. class UnpipelinedColorActuator extends Operation with SemanticError with RestoreHooks

    An output from a Cog computation, called an actuator.

    An output color field value is generated on every cycle of the simulation. The user function newOutput is called when the output is ready, so that the user may use that information elsewhere.

  17. class UnpipelinedColorSensor extends ColorField with SemanticError with RestoreHooks

    Inputs to a Cog computation are called sensors. This implements the unpipelined version of ColorSensors.

    Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.

    When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.

    Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.

    Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.

    Both types of sensors supply their next input via a nextValue function, which (optionally) returns an iterator over the values of the new input in row-major order.

    NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the function nextValue.

  18. class UnpipelinedSensor extends ScalarField with SemanticError with RestoreHooks

    Inputs to a Cog computation are called sensors. This implements the unpipelined version.

    Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.

    When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.

    Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.

    Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.

    Both types of sensors supply their next input via a nextValue function, which (optionally) returns an iterator over the values of the new input in row-major order. Alternatively, the nextValue function can supply the full dataset as an Array[Float] (for 0D or 1D fields), Array[Array[Float]] (for 2D fields), etc.

    The use of implicits here is primarily to avoid duplicate constructor signatures. Further work could make the primary constructor private, with only nextValue functions of certain forms allowed (not the generic () => _).

    NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the function nextValue.
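    Row-major order means all of row 0's values, then row 1's, and so on. The sketch below flattens a 2D dataset the way a nextValue iterator would present it (illustrative only, not the actual API):

    ```scala
    // Row-major flattening of a 2D field's data, as a nextValue iterator
    // would present it: row 0 left to right, then row 1, etc.
    def rowMajor(data: Array[Array[Float]]): Iterator[Float] =
      data.iterator.flatMap(_.iterator)
    ```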

  19. class UnpipelinedVectorActuator extends Operation with SemanticError with RestoreHooks

    An output from a Cog computation, called an actuator.

    An output scalar field value is generated on every cycle of the simulation. The user function newOutput is called when the output is ready, so that the user may use that information elsewhere.

  20. class UnpipelinedVectorSensor extends VectorField with SemanticError with RestoreHooks

    Inputs to a Cog computation are called sensors. This implements the unpipelined vector version.

    Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.

    When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.

    Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.

    Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.

    Both types of sensors supply their next input via a nextValue function, which (optionally) returns an iterator over the values of the new input in row-major order.

    NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the function nextValue.

  21. class VectorActuator extends VectorField with SemanticError with RestoreHooks

    An output from a Cog computation, called an actuator, here pipelined to overlap CPU and GPU work.

    An output vector field value is generated on every cycle of the simulation. The user function update is called when the output is ready, so that the user may use that information elsewhere.

  22. class VectorField extends Field with CompilerError with SemanticError

    A multidimensional array of vectors.

  23. class VectorSensor extends VectorField with SemanticError with RestoreHooks

    Inputs to a Cog computation are called sensors. This implements the pipelined version.

    Sensors can be either pipelined or unpipelined. Pipelined sensors use the CPU to produce an input to the GPU while the GPU is working on the previous input. Thus, there's effectively a pipeline stage between the CPU and the GPU and both do their work in parallel. Unpipelined sensors have no such pipeline stage, so the CPU must provide its input first before the GPU processes that input further, i.e. the CPU and GPU do their work in series.

    When an unpipelined sensor's nextValue method is called, it must always return an iterator over the next input's data. However, a pipelined sensor has the option of returning None, if no new input is available. In that case the pipeline register that the sensor is feeding is not clocked and the same data is presented to the GPU. This can be used to decouple a slow sensor from a fast-running simulation, making the sensor appear effectively 'asynchronous'.

    Both sensor types can accept a resetHook method, which can be used for example to go back to frame-0 of a movie that's played out from a file, or to start over from the first image of a training set. If a sensor supplies no nextValue iterator upon reset, an all-0 field will be supplied.

    Finally, sensors can be throttled back to a specified simulation rate by the desiredFramesPerSecond parameter. This ensures that a movie is played out at the appropriate speed, for example.

    Both types of sensors supply their next input via a nextValue function, which (optionally) returns an iterator over the values of the new input in row-major order.

    NOTE: if the user wishes to optimize input using, for example, multiple threads or double-buffering, that must be done in the implementation of the function nextValue.

Value Members

  1. object Actuator extends SemanticError

    Factory for creating actuators that write fields to Scala arrays.

  2. object ColorActuator extends SemanticError

    Factory for creating actuators that write fields to Scala arrays of Pixels.

  3. object ColorField extends CompilerError

    Functions for creating constant/recurrent color fields.

  4. object ColorSensor

  5. object ComplexField extends CompilerError

    Functions for creating constant/recurrent complex fields.

  6. object ComplexVectorField extends CompilerError

    Functions for creating constant/recurrent complex vector fields.

  7. object Field extends CogFunctionAPI with SemanticError

    Factory for creating constant fields.

    This object extends Intrinsics to give the ImplicitConversions trait a convenient place to access the methods defined there, as in: Field.vectorField(in1, in2)

  8. object MatrixField extends CompilerError

    Functions for creating constant/recurrent matrix fields.

  9. object Operation

  10. object ScalarField extends CompilerError

    Functions for creating constant/recurrent scalar fields.

  11. object Sensor

  12. object UnpipelinedActuator extends SemanticError

    Factory for creating actuators that write fields to Scala arrays.

  13. object UnpipelinedColorActuator extends SemanticError

    Factory for creating actuators that write fields to Scala arrays.

  14. object VectorActuator extends SemanticError

    Factory for creating actuators that write fields to Scala arrays.

  15. object VectorField extends CompilerError

    Functions for creating constant/recurrent vector fields.

  16. object VectorSensor
