
Re: [ontolog-forum] How de facto standards are created

To: ontolog-forum@xxxxxxxxxxxxxxxx
From: John F Sowa <sowa@xxxxxxxxxxx>
Date: Tue, 18 Jun 2013 22:23:18 -0400
Message-id: <51C11616.40407@xxxxxxxxxxx>
Kingsley, David, and Hans,    (01)

JFS
>> But the little semantics that most OWL users actually use does not
>> require anything more than Aristotle.    (02)

KI
> But how do you put Aristotle into a Web resource such that user agents
> can then make sense of what they encounter?    (03)

You can use Aristotle's subset of logic in exactly the same way that you
would use OWL or any other DL.  It is the most widely used subset of any
of them.  But it is very easy to learn, and it can be written in just
four simple sentence patterns in English or any other language.    (04)
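[The four patterns are the classical categorical forms: A "Every X is Y",
E "No X is Y", I "Some X is Y", O "Some X is not Y".  As a rough
illustration (not from the original post, and only covering the A
pattern), the subsumption reasoning they support can be sketched in a
few lines of Python:]

```python
# Minimal sketch of reasoning with Aristotle's A pattern ("Every X is Y").
# "Every" assertions form a subsumption hierarchy; the Barbara syllogism
# (Every X is Y, Every Y is Z => Every X is Z) is just transitive closure.
# All class names below are made-up examples.

from collections import defaultdict

class Syllogistic:
    def __init__(self):
        self.every = defaultdict(set)   # every[x] = set of y with "Every x is y"

    def assert_every(self, x, y):
        """Assert the A-pattern sentence 'Every x is y'."""
        self.every[x].add(y)

    def entails_every(self, x, y):
        """Derive 'Every x is y' by chaining A sentences (depth-first search)."""
        seen, stack = set(), [x]
        while stack:
            cur = stack.pop()
            if cur == y or y in self.every[cur]:
                return True
            for sup in self.every[cur]:
                if sup not in seen:
                    seen.add(sup)
                    stack.append(sup)
        return False

kb = Syllogistic()
kb.assert_every("human", "animal")           # Every human is an animal
kb.assert_every("animal", "mortal")          # Every animal is mortal
print(kb.entails_every("human", "mortal"))   # Barbara syllogism: True
```

[The E, I, and O patterns would be handled analogously with disjointness
and existential assertions; the point is that this fragment needs no
machinery beyond a simple graph search.]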

JFS
>> it's essential for the SW to dump that totally irrelevant
>> and hopelessly misleading buzzword "decidability".    (05)

KI
> Many of us moved beyond that ages ago.    (06)

That's true because it's impossible to implement any practical
application without using Turing-complete languages, which can
express undecidable programs.    (07)

Tim B-L recognized that point in his DAML proposal in 2000, but the
"decidability thought police" killed many useful languages, such as
SWRL, RuleML, and others, because they were undecidable.    (08)

As a result, it's impossible to implement a complete application with
just the SW-approved tools.  Every developer has to supplement them
with undecidable languages that are not on the approved list.    (09)

DE
>> Is there any practical value in 50 billion triples?
>>
>> It's been my experience that the more stuff one combines from more sources,
>> the noise level just goes through the roof.    (010)

KI
> So you need entity disambiguation as a feature of tools that interface
> with the LOD cloud.    (011)

HP
> The issue is not so much “noise” – although there is certainly some of that...
> as it is the multiplicity of contexts/perspectives implicit in the data and
> the varying/overlapping scope of said contexts and associated perspectives
> that make this a difficult problem.    (012)

I agree.    (013)

When you get a million data items that were produced by more than one
person using more than one computer program, you can be certain that
you're comparing apples, oranges, walnuts, and sauerkraut.    (014)

When you get to 50 billion, they will break down into many different
subgroups, each of which might be useful in some context from some
perspective.  But there is no single reasoning method that could
meaningfully deal with all of them.    (015)

John    (016)

_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J    (017)
