JK> How does a cyclic definition introduce incompleteness? (01)
When sentences contain cyclic definitions, they can become inconsistent;
think of the halting problem (undecidability) with Turing machines. If
we were to introduce recursion into our interpretations, it would be
limited to partial recursion. The task, then, would be to create an
interpretation of the sentence which is acyclic, making our sentence
consistent. The interpretation would be decidable, and our recursive
function total. (02)
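To make the idea concrete, here is a minimal Python sketch (the definitions and the checker are my own invented illustration, not from any real reasoner): a naive evaluator recursing through cyclic definitions would never terminate, while over acyclic definitions evaluation is a total function.

```python
# Hypothetical illustration: detect whether a set of definitions is
# acyclic before trying to evaluate it. The names (CYCLIC_DEFS, etc.)
# are invented for this sketch.

CYCLIC_DEFS = {"A": ["B"], "B": ["A"]}            # A defined via B, B via A
ACYCLIC_DEFS = {"A": ["B"], "B": ["C"], "C": []}  # bottoms out at C

def is_acyclic(defs):
    """True iff no definition eventually refers back to itself."""
    def visit(term, seen):
        if term in seen:
            return False  # found a cycle: naive evaluation would not halt
        return all(visit(dep, seen | {term}) for dep in defs[term])
    return all(visit(t, set()) for t in defs)

print(is_acyclic(CYCLIC_DEFS))   # False: only a partial interpretation exists
print(is_acyclic(ACYCLIC_DEFS))  # True: evaluation is decidable and total
```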
JK> I seem to recall that if you are not explicitly using time as a variable in
> a logical system, then you have to deal with "timeless" things that are always
> true rather than being concerned with what used to exist or what could exist.
I had this idea, based on Modal Logic, that saying some set
of properties defines a concept now, but not at a later time, could be
a way of quantifying differences between concepts in different
ontologies. Think of a Wikipedia entry, where the definition of a term
may change with popular opinion or when more information becomes available.
On the 3D vs 4D discussion, you can look at this thread:
BG>> The main one is "classification - the act of assigning something a class". (06)
JK> Okay, so the context is some kind of group of true statements which
> are used to make sense of what happens when you do the activity of
> classifying something ("assigning something a class"). I guess this
> is indistinguishable from a microtheory, but someone who knows more
> about it will have to weigh in to see if I am guessing right. (07)
I'm not sure. Don't know enough about microtheory. Can you elaborate? (08)
JK> By the way, I assume you made a typo when you said you would identify
> a Cat as an instance named Fido. and really meant that you would classify
> an instance named Fido as an instance of the class named Cat. (09)
> Thinking of this, I guess when I said "an instance named Fido", I must
> be thinking of something like "an instance of #$Thing named Fido"
> where #$Thing is a generalization of Cat. (011)
BG>> The secondary one is something that would be achieved through an
>> empirical method. What I mean here is that I don't know in how many
>> ways I can classify a Cat. If I have only experienced gray Cats, I
>> wouldn't even think of classifying them by Colour. If at some point I
>> find a cat which is Black, I have to add Colour to my classification. (013)
JK> I guess I'm thinking of databases where a "colorOf" column would exist
> for each instance. But what if you add Colour as part of your classification
> but you don't know what color a particular instance might happen to be? (014)
In statistical inference, depending on the algorithm you use to
classify, there are different ways of handling unknown values.
NaiveBayes handles them well. Actually, missing values are a reality
of data mining, so most techniques handle them well. In Machine
Learning, generalization allows you to learn from a set of data to
build a model, and apply that model to different variations of
attributes, missing or supplied. (015)
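A toy sketch of one common way a Naive Bayes classifier can simply skip missing attributes (the classes, attributes, and probabilities below are invented for illustration; this is roughly how I understand WEKA's NaiveBayes to behave, not its actual code):

```python
import math

# P(attribute=value | class), as if estimated from some training set.
likelihoods = {
    "Cat": {"colour": {"gray": 0.6, "black": 0.4}, "legs": {4: 0.99, 3: 0.01}},
    "Dog": {"colour": {"gray": 0.3, "black": 0.7}, "legs": {4: 0.95, 3: 0.05}},
}
priors = {"Cat": 0.5, "Dog": 0.5}

def classify(instance):
    scores = {}
    for cls in priors:
        score = math.log(priors[cls])
        for attr, value in instance.items():
            if value is None:   # missing value: just leave it out of the product
                continue
            score += math.log(likelihoods[cls][attr][value])
        scores[cls] = score
    return max(scores, key=scores.get)

# legs is unknown, so colour alone decides
print(classify({"colour": "black", "legs": None}))  # Dog
```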
Mind you, as Chris Welty pointed out to me when I was specifically
talking about information gain and entropy, this isn't logic, it's
data mining. Some classification algorithms lend themselves better to
being mapped to logic, as John F. Sowa pointed out. (016)
> BG> I'm wondering whether there is a point where statistical
> > analysis can take over where we simply don't know enough
> > about a topic to infer information using logic alone. (017)
JFS> There is no difference in principle... Logic can be used to
> define what is an A, what is a B, and what is a C.
> Statistics counts how many As, how many Bs, and how many Cs.
> As for the C4.5 algorithm, it is more closely related to
> logic than it is to statistics. It's a kind of learning
> algorithm, but what it learns is a *decision tree* that
> can be expressed as a very large nest of if-then-else
> statements. That tree can be mapped to a program in any
> language that supports if-then-else statements, such as
> C or Java. It can also be mapped to first-order logic.
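JFS's point is easy to see in code: a decision tree, however it was learned, is just nested if-then-else. Here is a hand-written stand-in for a tree C4.5 might produce (the attributes and classes are invented; each leaf also corresponds to a conjunction of conditions in first-order logic):

```python
# A toy decision tree expressed as nested if-then-else statements.
# Leaf "Cat" corresponds to the FOL rule:
#   legs(x, 4) & has_whiskers(x) -> Cat(x)

def classify_animal(legs, has_whiskers):
    if legs == 4:
        if has_whiskers:
            return "Cat"
        else:
            return "Dog"
    else:
        return "Other"

print(classify_animal(4, True))   # Cat
print(classify_animal(2, False))  # Other
```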
JK> I guess a Fuzzy Logic value of a probability of Black colour makes sense.
> something like: if you know a priori that 25% of all cats are black then
> the value of colorOf when Unknown has a 25% blackness. If all the other
> colors have less than 25% probability of being true then the system
> could do some kind of default logic where it assumes that the cat is
> a BlackCat until it has information to the contrary. This would necessitate
> that the system was a non-monotonic reasoner I guess, which I have been
> told is actually quite hard to implement.
> Or am I confusing the probability of something being true with the
> degree of certainty that something is true? (019)
Re: certainty that something is true?
Fuzzy logic specifically works with a (classification) membership
function, which includes everything, with varying degrees. So a green
lizard is 0% a white cat. This becomes very problematic for logical
inference, because those axioms were meant to give you truth values,
either true or false. (020)
Re: probability of something being true:
This may relate more to association (vs classification) rules. The
result is a prediction based on a limited number of attributes that
essentially says: given values X, I predict Y..Z for the remaining
attributes, and therefore assume class A. This may lead you to try out
the different combinations, which is often infeasible. But it does
give you a way to compute which attributes are the most important,
i.e. most significant for a successful classification. (021)
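A minimal sketch of what such an association-style prediction looks like (the observations are invented): count how often an attribute value co-occurs with a class, and rank attributes by how well they predict it.

```python
# Invented observations: (colour, legs, class)
data = [
    ("black", 4, "Cat"), ("black", 4, "Cat"), ("gray", 4, "Cat"),
    ("black", 4, "Dog"), ("brown", 4, "Dog"), ("brown", 4, "Dog"),
]

def confidence(attr_index, value, cls):
    """P(class | attribute=value): the confidence of the rule value -> cls."""
    matching = [row for row in data if row[attr_index] == value]
    if not matching:
        return 0.0
    return sum(row[2] == cls for row in matching) / len(matching)

# colour=black -> Cat holds in 2 of the 3 matching rows
print(confidence(0, "black", "Cat"))
# legs=4 tells us nothing: every row has 4 legs, so confidence is the base rate
print(confidence(1, 4, "Cat"))
```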
JK> Does assigning a degree of truth to a class mean anything? How can a class
> be true or false? I thought only sentences could be true and false. Or are
> you using some shorthand (metonymy?) to mean that the "truth" of a class is
> the same as the truth of some sentence that holds for every instance of the
> class? (022)
I don't think it's shorthand, it's strictly a degree of how closely an
object resembles a known class definition.
The class "Black Cat" is false for an instance of White Cat, but their
similarities at a higher level, Cat, are close enough for the
relationship to be somewhat true. So Cat is the most significant class,
and ColorOf adds a level of granularity. (023)
JK> I can imagine a pool of sentences that are all true of every instance, a
> pool of sentences that are thought to be true of every instance because no
> counter example has yet been found, as well as two corresponding pools of
> sentences that are all false or are thought to be false. These would
> correspond to "sufficient" true sentences, (024)
I don't know if this would work because then every true sentence would
be a sufficient condition for any unknown conclusion. If I understand
you right. (025)
JK> I guess for some kind of modal
> logic instead of a First Order Predicate Logic (FOPL). These sentences
> could be tied together to make some kind of grand single sentence with logical
> AND between each one.
> I don't really know what you would do for a more fuzzy kind of logic.
> Would you have pools of sentences that are only thought to be true if some
> kind of guard condition is true? (026)
That's essentially what happens in fuzzy logic. You assume every
object belongs to every class, but with different degrees. The guard
condition might be a degree of 0, or some other threshold. For crisp
(vs fuzzy) values the possible degrees are just 0 or 1. See
further down for some elaboration on this. (027)
JK> maybe like an IF-THEN statement or a WHEN
> statement? But this seems to be just a way to keep a single class when the
> natural thing would be just to make sub-classes where the guard condition
> is known to be true, and then put those dependent pools in as the known-true
> and the thought-to-be-true pools of the sub-class. (028)
Yes, but you perhaps didn't know you had to make a sub-class, for a
white cat lets say, if you've never seen one. (029)
JK> I guess if you had a pool of sentences that could be combined with a logical
> OR between each one, that would be something like a "necessarily" true
> condition for the class using modal logic. There would be similarly
> necessarily thought-to-be-true, "necessarily false" and
> "necessarily-thought-to-be-false" pools as well. (030)
Modal logic might give you axioms to reason over probabilistically
created propositions, with the loss of precision that fuzzy logic would
give you. Or is there a way to assign degrees to propositions in modal
logic?
JK> I don't really understand how "necessarily true" and "sufficiently true"
> work in modal logic, so could someone tell me where I am wrong?
> The reason I think that these would be useful in an ontology is that if you
> have a predicate like "weightOf" which is used in one of these 8? pools about
> instances of a class, and you have some instance of #$Thing which you don't
> know if it is an instance of the class, but you do know that the predicate
> doesn't make sense for the instance, you can immediately reject it as an
> instance of the class. And I guess you could say you know that it is an
> instance of the NOT-class (is that called a ~Cat in this example?) that
> John Sowa talks about in some of his writings. (032)
Interesting question. (033)
> If you don't actually have a true or false value for each sentence, (and
> for each predicate-on-some-values ) but you have some kind of fuzzy truth
> instead, I guess you can use a FUZZY-AND and a FUZZY-OR logical connector
> as above on the sentences. The thing I don't know is how do you know what
> fuzzy-truth number to attach to a particular sentence or predicate? It
> seems that the fuzzy-truth operations sound good in a slide show, but
> aren't really usable in a real knowledge base. (034)
JK> Could someone enlighten me please about how you actually do this fuzzy logic
> stuff ? (035)
Most importantly, the degrees are determined through inductive means.
They are reasoned over with fuzzy logic (results in [0,1]), vs crisp
logic (results in {0,1}). (036)
uA(x) = membership degree of x in class A
A fuzzy union = max( u1(x), u2(x) )
A fuzzy intersection = min( u1(x), u2(x) )
A fuzzy complement = 1 - u1(x) (037)
These operators are commutative, associative, and distributive.
We can apply De Morgan's Laws, the Law of Contradiction, and the Law of
Excluded Middle. Note that Excluded Middle is fuzzy, in that it relies
on a degree of 0, not exclusion, so A ^ ~A is not empty, it just has
members with degrees of 0. (038)
The logical reasoners are standard implication, if-then rules, and
modus ponens, using the fuzzy operators above. (039)
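The operators in (037) are only a few lines of Python (the sets and membership degrees below are invented for illustration):

```python
# Membership degrees per object; the green lizard is degree 0 in both sets.
black_cat = {"tom": 0.75, "whiskers": 0.25, "lizzy": 0.0}
white_cat = {"tom": 0.25, "whiskers": 0.75, "lizzy": 0.0}

def fuzzy_union(u1, u2):
    return {x: max(u1[x], u2[x]) for x in u1}

def fuzzy_intersection(u1, u2):
    return {x: min(u1[x], u2[x]) for x in u1}

def fuzzy_complement(u1):
    return {x: 1 - u1[x] for x in u1}

cat = fuzzy_union(black_cat, white_cat)
print(cat["tom"])  # 0.75

# "Fuzzy" Excluded Middle: A ^ ~A is not empty, its members just carry
# low degrees (here 0.25 for tom, 0.0 for lizzy).
contradiction = fuzzy_intersection(black_cat, fuzzy_complement(black_cat))
print(contradiction["tom"], contradiction["lizzy"])  # 0.25 0.0
```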
>> JK> similar attributes - how is this different than having a "certain
>> > relationship"
>> > aren't attributes relationships too? what makes the "certain
>> > relationship"
>> > special enough? and don't attributes refer to something else as well? (040)
BG>> In FOL, predicates can be properties or relationships. The
>> interpretation would be: properties are atomic; attributes can be
>> structures themselves. As structures, they are relationships with
>> other objects.
> so relationships refer to "external" entities and properties refer to
> "internal" entities? (041)
At this point I'm not sure if it's that concrete, but it's along those lines. (042)
JK> Does each class have a list of each so you can tell the difference? Is a
> property just a monadic predicate? What kind of structure are you thinking
> of? (043)
I'm not sure what the transformation between contexts will be at this
point. Any ideas? Perhaps something I can interpret via morphism, to
track a predicate's change. (044)
>> JK> I assume "type of X particle" is another way of saying the particle
>> can be
>> > assigned to the class X. Why do you use the word "type" here instead of
>> > class?
>> I was defining the "type of" context for R10, not the class of X. In
>> my very limited example they are either as a 'category' or 'instance.'
> I'm still confused, could you elaborate a little more? (045)
It was an example of a **possible** solution, but mostly to
demonstrate the concept. Let me leave it at that for now. (046)
BG>> It's the Heisenberg uncertainty principle. (047)
JK> That's the principle I was thinking of. So even if you aren't really talking
> about particles, is there a similar principle that holds between some set of
> I guess this would have to be a discussion in its own right... (048)
The transitivity example is one such relation. The key is to create a
stop condition for recursively calling cyclic conditions. How? Where?
That's to be determined. (049)
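One conventional stop condition, sketched in Python with an invented relation: track what has already been visited and refuse to recurse into it again, so a transitive closure over a cyclic relation still terminates.

```python
# An invented relation with a cycle: a -> b -> a.
relation = {"a": ["b"], "b": ["c", "a"], "c": []}

def reachable(start, rel, seen=None):
    """All nodes transitively reachable from start; terminates on cycles."""
    if seen is None:
        seen = set()
    for nxt in rel[start]:
        if nxt not in seen:   # the stop condition: never revisit a node
            seen.add(nxt)
            reachable(nxt, rel, seen)
    return seen

# 'a' is reachable from itself via the cycle, and recursion still halts.
print(sorted(reachable("a", relation)))  # ['a', 'b', 'c']
```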
JK> Do you envision associating with each individual thing in the computer
> the reasons why that individual is an instance of a particular class? (050)
It's represented by an acyclic graph, so a branch for a three-legged
dog can be reached from both <Dogs> and <three-legged-things>. The
hierarchy is irregular, as demonstrated here: http://www.jfsowa.com/logic/math.htm#Lattice (051)
Essentially, any partially ordered set. I don't recall the name, but
in the early 1900s someone proved, or tried to prove with some success,
that every relation is a lattice (or something to that effect). I'm
not sure if this was proved not to be true, but I know that Tarski
proved lattice theory is unsatisfiable. So it's not as simple as
converting to a lattice. Or perhaps someone has proved him wrong at
this point. I'm not sure. (052)
>> BG>> If I implement some uncertainty measures, something like fuzzy logic,
>> >> at some level in the system, most likely at the last steps of
>> >> inference, I could dynamically calculate class membership or inclusion
>> >> based on empirical data.
>> JK> I assume you mean class membership in the particles that are acting in
>> > a certain way historically, which is what you were talking about before.
>> > Why would inference, fuzzy logic or uncertainty even come up?
BG>> I can classify something based on fuzzy sets. I can say that a Gray
>> Cat belongs 50% in the Black Cat set, and 50% in the White Cat set. (053)
> That kind of makes sense. I don't know what statistical inference really
> means. Have you got a link I could follow to read more about it?
> Is this the same thing as Bayesian probabilities where you can tell
> the difference between whether something is an actual conclusion
> or just a coincidence? I seem to recall something like that in school. (054)
A note on my previous statement:
"I could dynamically calculate class membership or inclusion based on
empirical data."
I am thinking of this as resembling Logic and Functional programming:
[Bundy, A. et al., "Survey of Automated Deduction"] (055)
The quick answer is http://en.wikipedia.org/wiki/Statistical_inference
But yes, it's like Bayesian probabilities. Regarding conclusion vs
coincidence, I'm not sure what you mean, but I believe coincidence can
be treated as a type of error. (056)
A great book on Data Mining is "Data Mining: Practical Machine
Learning Tools and Techniques" by Ian H. Witten and Eibe Frank, 2nd Ed.
Also, WEKA, the Waikato Environment for Knowledge Analysis, is a great
tool for running different datasets against numerous data mining
algorithms, including classification. I believe WEKA is written by
the same two authors. (057)
JK> so I guess reaction as you use it here is a metaphor based on a chemical
> reaction? Does the metaphor allow for things that are catalysts in that they
> aren't actually part of the reaction, but when they are around they make the
> reaction happen faster? I guess if you are using a reaction as a metaphor
> for the logical inferencing process, there could be some facts that make
> the inferencing happen quicker.
> Such as if you know who someone's father is and who that person's
> father is, you could conclude who the original person's grandfather might
> be. (058)
This would be determined through reasoning alone. (059)
JK> Of course if you have a candidate grandfather, you still have to find out
> if the father's father is the same person as the candidate grandfather.
> It could be that you have two aliases for the same person, or that the
> candidate grandfather is actually not the paternal grandfather at all, but
> is the maternal grandfather. (060)
This sounds like a good candidate for it, yes. (061)
> But maybe you aren't using a chemical reaction as the basis for your
> metaphor and you mean something else like an allergic reaction or
> a physics action-reaction as the basis for what you are trying to say? (062)
I wouldn't commit to any domain at this point, but all are very good
candidates. Anything where empirical methods would help, I suppose. (063)
JK> Nah, numbers are boring unless you have a mathematical ontology.
> Use cats and dogs, and maybe even mooses. (moosen? meese?) (064)
Here in Canada, I still struggle with Geese vs Canadian Geese. Another story.
Mathematical theories are good to use here because they have an
abundance of axioms and real world observations to try this out on. (065)
BG>> Yes, but I want to use that same relation R10 to express transitivity
>> in different contexts. For that I need a new clause (R2), which checks
>> the context.
> Is this definition of context fundamentally a different approach than
> attaching the
> pools of statements to a class like I outlined above? It feels like
> it works in the same way that context-sensitive left-hand-sides exist
> in a grammar, but are disallowed in context-free
> grammars. (066)
I'm not sure. I'm rusty on context-free grammars, as it's been a
while. Can you elaborate? (067)
BG>> Yes Fido would have its own tree, but since Fido is an instance, it
>> would be compared to the predicates of the 'instances' context, which
>> have a hierarchy of classification rules, separate from the context
>> of the subclass/superclass hierarchy.
JK> Whoa. Fido would have its own tree? Are you saying Fido is a tree?
> I thought it was a node in a tree, or actually the name of two nodes,
> one in each tree? (069)
Well, Fido would have to be represented in this system somehow, but
notice that I said it would be compared to the predicates of the
'instance' context. I was perhaps cheating because I'm not sure just
yet how an instance is represented here. (070)
JK> You also just threw in a hierarchy of classification rules rather
> than a hierarchy of nodes. This is moving from a taxonomy to a formal
> ontology, I think. Are these classification rules generalizations of each
> other, or are they independent of each other? Are these classification
> rules the pools that I talked about earlier? or some other beastie? (071)
Yes, it is. They may be generalizations of each other, but I don't
think it would be a pool of predicates. (072)
BG>> R10 tells you whether A and B have a transitive relationship between them.
> um. did you mean to say that the nodes (classes) of A have a transitive
> relationship between
> each other, and there is a transitive relationship between the nodes
> (classes) of B?
> or did you mean there is some kind of tree that has trees (A & B) as nodes,
> and there is some
> kind of relationship between those trees that is transitive? (073)
It would be a variation of the latter, because the A & B trees are not
branches of one tree, at least not at first. If sufficient
similarities have been found, then they are one and the same. We're
trying to map one tree to another, looking for similarities. (074)
JK> To my way of thinking, Fido can't "exist as an instance". Fido has to
> exist as an instance of Some Class. I thought K-9 and Animal have
> a subclass/superclass (class-generalization) relationship between them.
> In my mind, if X is a subclass of Y, then it doesn't make sense to say
> that X is an instance of Y. Hmm. maybe I need to take that back. I
> can imagine saying that #$Dog is a collection of individuals, thus each
> of those individuals are instances of the class #$Dog. So since each
> of those individuals are also instances of the class/collection #$Thing,
> then #$Dog is a subclass of #$Thing. But #$Dog is an individual as
> well as being a class/collection, so it is an instance of #$Thing as
> well. Is there any other counter example? My gut says that #$Thing
> as the Universal Set gets a special case that other class/collections
> can't get, notably because #$Thing is an individual as well, and is
> an instance of itself. (That recursive kind of relationship that started
> this whole thread) (075)
This is back to the distinction between the class *inclusion*
relationship and the class *membership* relationship. If that cleared
it up, then skip the next part, otherwise let me clarify. Forget about
Fido for now, and any instances of Dog in the real world. Let's talk
about class inclusion. (076)
All we have are subclass/superclass relationships.
So we have:
Animal
 -> Species
    -> Dog
    -> Cat (077)
So Dog is a Species, and Species is an Animal.
Since subclass/superclass relations are transitive, we can say
Dog is an Animal.
** same for Cat (078)
So right now, Species has 2 subclasses, Dog and Cat. So the
cardinality of Species is now 2. Let's call it Sc.
Now we add another Species, Mouse
Now Sc = 3. (079)
Again, forget about Fido for now. Dog, Cat and Mouse, are instances in
the Collection of Species, and there are 3 of them (Dog, Cat, Mouse).
And each has a transitive relation to Animal. The use of the word
instance means ***instances of subclasses*** of Species. This is where
the usage of the word instance gets confusing. (080)
Now let's look at class membership.
Fido is a member of the Dog collection, and Dog has its own
cardinality, let's call it Dc.
Now we have instances of Dog, Fido and Pooch, and Dc = 2.
Let's add Spot. Now Dc = 3. (081)
Here's the difference. When we added a new Species, Mouse, we
increased the cardinality of subclasses of Animal, Ac, as well as Sc.
When we added Spot, Dc incremented, but Sc and Ac did not. Why not?
Because adding to Sc is different from adding to Dc. Dc takes
instances of dogs, Sc takes ***instances of subclasses*** of itself,
like Mouse, Giraffe, etc. (082)
And subclasses of Species are transitive, via the subclass track, up to Animal.
Instances of Dog are not transitive to Animal's cardinality of subclasses. (083)
There is the cardinality A-RWc of instances of real-world things, like
Fido, which we call Animals. And adding Spot the Dog, or Mindy the
Giraffe, will add to A-RWc.
Adding Giraffe to Species will add to Ac. (084)
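The bookkeeping above can be sketched like this (the table names are invented): one table carries class inclusion, the other carries class membership, and the two cardinalities move independently.

```python
# Class inclusion (subclass/superclass) vs class membership (instances).
subclasses = {"Animal": {"Species"}, "Species": {"Dog", "Cat"}}
members = {"Dog": {"Fido", "Pooch"}}

def Sc():  # cardinality of Species' subclasses
    return len(subclasses["Species"])

def Dc():  # cardinality of Dog's members (instances)
    return len(members["Dog"])

print(Sc(), Dc())                    # 2 2
subclasses["Species"].add("Mouse")   # a new Species bumps Sc (and Ac)...
members["Dog"].add("Spot")           # ...while Spot bumps only Dc
print(Sc(), Dc())                    # 3 3
```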
BG>> In the **structural categories** context it does not hold. In the
>> **instance** context it does. (085)
JK> This mostly makes sense. But one question...
> How do I know what context I'm in? Is this where Situational Logic comes in?
> I wasn't too sure what it was and what people mean by it. (086)
I'm not sure how I will qualify the context at this point. Changes
in the predicates are the first attempt. I listed some ideas in my
previous message.
JK> So this "type" property is some kind of property whose value is the name
> of another property? Is it a single valued property? or is it possible that
> the sentence is true where
> ThereExists FOO such that (FOO.type='instance') Logical-AND
> (FOO.type='category') (088)
At this point I'm not sure. Type itself might not be the right term,
because as you pointed out, type should identify a class, which is
misleading. It will be some kind of connection to context though,
checked via something like R10. (089)
JK> Hey, I'm getting some of what you are saying... (090)
That's great (091)
BG>> I needed to use specialization here. It's the
>> predicates that change from context to context. (092)
JK> Specialization of the predicates or specialization of the classes? (093)
Specialization of the classes, via predicates. I need to look this up
to see if there is a connection already. (094)
Thanks for the comments!
MSc Candidate, '10
Dept. of Computer Science
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
To Post: mailto:ontolog-forum@xxxxxxxxxxxxxxxx (096)