Dale and Kingsley, (01)
I'd like to focus on two of the points that Dale made: (02)
DFK
> Is it at all helpful to more precisely identify what is contributing
> to the bigness in big data? (03)
For different applications, different features may have a stronger
influence on the computational complexity. When estimating the running
time, look for whichever size N dominates the computation. (04)
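As a rough illustration of which N dominates (the numbers are made up
for the example), consider a table of R rows and C columns. A single
pass over every cell costs about R*C operations, but a pairwise
comparison of rows costs about R^2 * C -- so the R^2 term dominates:

```python
# Hypothetical dataset: R rows, C columns (illustrative sizes only).
R, C = 1_000_000, 50

# One pass over every cell: O(R*C) work.
per_row_pass = R * C

# Comparing every pair of rows, each over C columns: O(R^2 * C) work.
pairwise_rows = R * (R - 1) // 2 * C

print(per_row_pass)   # 50,000,000 operations -- easy on one machine
print(pairwise_rows)  # roughly 2.5 * 10^13 operations -- a very different problem
```

Here the same data set is "big" for the pairwise step long before it is
big for the linear scan, because the dominating N is R^2, not R*C.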
For any N that is considered large, the algorithms that process
the data should take time that grows in proportion to N -- or at
least no worse than O(N log N). (05)
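To see why that threshold matters, a short sketch of how the three
growth rates diverge (assuming, for the sake of the arithmetic, a
machine doing on the order of 10^9 simple operations per second):

```python
import math

# Compare N, N log N, and N^2 as N grows to "big data" sizes.
for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}  N log N={n * math.log2(n):>12.2e}  N^2={n**2:.1e}")
```

At N = 10^9, an N log N algorithm needs about 3 * 10^10 operations --
under a minute at the assumed speed -- while an N^2 algorithm needs
10^18 operations, which is on the order of 30 years.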
DFK
> Computing power will continue to increase, making the number
> of instances a trivial matter (06)
No. As computers get bigger and faster, their users always find or
create more data for other computers to process. So the typical data
size N will grow as fast as the size, speed, and number of computers. (07)
And for any size N, N^2 will always be much, much, much bigger than N. (08)
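That gap cannot be closed by faster hardware: doubling the data merely
doubles a linear cost but quadruples a quadratic one, so the ratio
between them keeps widening. A minimal sketch:

```python
# Each doubling of N doubles the linear cost but quadruples the
# quadratic cost, so the gap between them grows without bound.
n = 1_000
for _ in range(5):
    print(f"N={n:>7,}  linear={n:>7,}  quadratic={n**2:>15,}  ratio={n**2 // n:,}")
    n *= 2
```

The ratio column is N^2 / N = N itself, so however fast the machines
get, the quadratic algorithm falls further behind as N grows.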
KI
> According to the ISO task force trying to make sense of big data
> a draft definition is:
>
> Big Data: a data set(s) with characteristics that for a particular
> problem domain at a given point in time cannot be efficiently processed
> using current/existing/established/traditional technologies and
> techniques in order to extract value (09)
This definition is not bad, but it implies that the word 'big' is
a moving target. A criterion based on computational complexity, by
contrast, scales with the technology: it classifies data as big
relative to algorithmic growth rates, not to any particular machine. (010)
John (011)
_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J (012)