There are always models involved in extrapolating from the
actual stimulus detected by the sensor to
the estimated properties of entities. It seems to me that the most
flexible approach is to carry this out incrementally, as is done
in the SSN ontology.
In the case of, say, a thermistor associated in some way with a
crocodile, the sensor is internally measuring the voltage of some
component of the device. There is a model governing how that
voltage is converted into a temperature of the sensor environment,
including uncertainty models that may depend on how many raw
readings are averaged, the sampling frequency, how quickly the
device comes into thermal equilibrium with its surroundings, and
the rate at which the temperature of the environment is changing.
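Something like the following minimal sketch, where the divider
circuit, beta coefficient, and other parameters are invented for
illustration:

    import math

    # Hypothetical thermistor-divider parameters (illustrative only)
    V_SUPPLY = 3.3      # supply voltage (V)
    R_FIXED = 10000.0   # fixed divider resistor (ohms)
    R_25 = 10000.0      # thermistor resistance at 25 C (ohms)
    BETA = 3950.0       # beta coefficient (K)
    T_25 = 298.15       # 25 C in kelvin

    def volts_to_celsius(v):
        """Convert one divider reading into an estimate of the
        temperature of the sensor environment (beta model)."""
        r_therm = R_FIXED * v / (V_SUPPLY - v)
        inv_t = 1.0 / T_25 + math.log(r_therm / R_25) / BETA
        return 1.0 / inv_t - 273.15

    def averaged_reading(voltages):
        """Average several raw readings; more samples shrink the
        noise term in the uncertainty model."""
        temps = [volts_to_celsius(v) for v in voltages]
        return sum(temps) / len(temps)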
If it is desired to use this data to estimate the temperature of
the crocodile, then there is another model relating the more
direct sensor-environment-temperature estimate to this property
(of a different FOI). Crocs are cold-blooded, so there will be a
correlation between their internal temperature and the temperature
of their environment, but the model is more complex.
If we further suppose the location of the thermistor is being
detected, this would be done with a different device (e.g. GPS)
that is deployed on the same platform (croc) as the thermistor, or
perhaps together with the thermistor in an integrated system with
a single deployment. GPS has its own model to convert the actual
detected stimuli (radio signals) into an estimate of sensor
location. (This model is interesting in its own right, and
constitutes the largest use of relativistic physics that I am
aware of.)
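For a sense of the magnitudes (standard back-of-envelope figures
for GPS orbits):

    gravitational blueshift:  satellite clocks run fast ~45 us/day
    velocity time dilation:   satellite clocks run slow  ~7 us/day
    net offset:               roughly +38 us/day
    if uncorrected:           c * 38 us ~ 11 km of range error/day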
The model connecting sensor location to croc location fails if the
sensor becomes detached from the croc, or if the croc is eaten (by
another croc: a new deployment? by a python: exceeding survival
conditions?). Note that the relationship between sensor
temperature and sensor location is still valid if an integrated
system becomes detached from the croc, but the connection to croc
location fails.
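A toy encoding of which inferential links survive which failures
(all names invented for illustration):

    CONDITIONS = {"attached_to_croc", "croc_alive"}

    # Each derived-property link lists the conditions it needs.
    LINKS = {
        "sensor_temp -> env_temp":   set(),   # always holds
        "sensor_temp -> sensor_loc": set(),   # integrated system
        "sensor_loc  -> croc_loc":   {"attached_to_croc"},
        "env_temp    -> croc_temp":  {"attached_to_croc",
                                      "croc_alive"},
    }

    def valid_links(current):
        """Links whose preconditions all still hold."""
        return [l for l, needs in LINKS.items() if needs <= current]

    # After detachment, the integrated temp/location pairing
    # survives, but anything tied to the croc does not.
    print(valid_links(CONDITIONS - {"attached_to_croc"}))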
If one then wishes to use the data from multiple croc-temp sensors
to estimate density (of a croc population or in a particular
spatial region, each a different FOI), there is another model
regarding the percentage of the population that has attached
sensors. A single sensor can't provide a density measurement, so
this is not a property that is measured by any individual sensor.
The input is the entire dataset of croc locations. Scale (in the
geospatial sense) is an important factor in the model - a fine
scale gives less accurate individual estimates because of fewer
individuals in the sample, while a coarse scale may give accurate
but irrelevant estimates, depending on the intended use of the
value.
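A minimal sketch of such a density model, where the tagged
fraction and the grid scale are the invented model inputs:

    from collections import Counter

    def density_grid(croc_locations, cell_km, tagged_fraction):
        """Estimate crocs per square km on a square grid.
        Finer cells mean fewer tagged animals per cell and noisier
        estimates; coarser cells smooth but may be irrelevant."""
        counts = Counter((int(x // cell_km), int(y // cell_km))
                         for x, y in croc_locations)
        area = cell_km * cell_km
        return {cell: n / tagged_fraction / area
                for cell, n in counts.items()}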
Another step away from the initial sensing is to combine
croc-density with other factors to estimate river-fordability. Now
the ford is the FOI, and there is a compound input consisting of
several datasets. The scale of relevance for the croc-density
measurement needed here might differ from the scale needed for,
say, a bioclimatic envelope study.
If one wants to estimate the risk of a planned river-crossing,
then this future event becomes the FOI. If the risk is perceived
as too great, this event may never actually happen.
It makes sense to me that the first two steps - from voltage to
sensor environment temperature and from radio signals to sensor
location - would be associated with the "primary" data set. The
raw voltage readings from the thermistor are probably not recorded
- these readings would be converted by the sensor itself into
temperature readings, and this stream of temperature values
corresponds to the initial data set. After that point, the
stimulus is the dataset(s), and the "sensor" is the computer that
implements the process to derive estimates of other properties.
BTW, most raw datasets are subjected to a number of
transformations before publication. Some sensing devices require
calibration, so calibration spikes might be part of the raw data
stream, but don't correspond to sensing events of the FOI.
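For example, a trivial pre-publication pass (the record format and
flag name are invented):

    def strip_calibration(records):
        """Drop readings flagged as calibration events - part of
        the raw stream, but not observations of the FOI."""
        return [r for r in records if not r.get("calibration")]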
In this (SSN-based) approach, there is no need for explicit
sampling concepts in the core ontology - that aspect is contained
in the details of the specification of the "ssn:process". In
general, a more expressive language (Common Logic, SysML, or even
IKL) may be necessary to provide the process specification.
Tara
On 10/18/13 11:35 AM, Gary Berg-Cross wrote:
Josh,
But can't I have a concept like river-fordability as a feature
of interest which involves several things such as water depth
but also presence of crocs, piranhas, sharp rocks etc.