
Re: [ontology-summit] Reasoners and the life cycle

To: "'Ontology Summit 2013 discussion'" <ontology-summit@xxxxxxxxxxxxxxxx>
From: "Chris Partridge" <partridge.csj@xxxxxxxxx>
Date: Mon, 7 Jan 2013 15:24:30 -0000
Message-id: <00eb01cdeceb$17326220$45972660$@gmail.com>
And there is a further complication. There is a whole literature devoted to the 
question of whether experts have a representation of their expertise (one source 
is Gilbert Ryle, "Knowing How and Knowing That", Proceedings of the Aristotelian 
Society, New Series, Vol. 46 (1945-1946), pp. 1-16, 
http://www.scribd.com/doc/26910585/Ryle-Knowing-How-and-Knowing-That; another is 
Rethinking Expertise by Harry Collins). Of course opinions are divided on the 
matter.

This highlights one problem bedevilling data modelling and ontology: expertise 
is often 'knowing how' rather than 'knowing that', and hence experts cannot 
articulate their expertise 'accurately' - in the sense of producing a 
representation that converts easily into logical form.

Chris

> -----Original Message-----
> From: ontology-summit-bounces@xxxxxxxxxxxxxxxx [mailto:ontology-summit-
> bounces@xxxxxxxxxxxxxxxx] On Behalf Of David Leal
> Sent: 07 January 2013 15:09
> To: Ontology Summit 2013 discussion; 'Ontology Summit 2013 discussion'
> Subject: Re: [ontology-summit] Reasoners and the life cycle
> 
> Dear Matthew, Fabian and others,
> 
> I strongly agree with Matthew's assertion that if you ask any domain expert
> whether or not any non-trivial statement is true/accurate, then you will get
> an answer that begins "it depends ...".
> 
> Fabian says: "There is tons and tons of knowledge, which is completely
> uncontroversial, e.g. about the structure of proteins or the physical
> properties of substances (collected in huge reference databases of material
> science)."
> Where the "physical properties of substances" are concerned with their
> suitability for engineering structures, we are very much into "it depends ..."
> territory.
> 
> More importantly, I think that we need to take this head on and make sure that
> we know how to state things such as:
> - statements A are known to be valid within domain X, but it is not known
> whether or not they are valid within domain Y;
> - statements B are known to be consistent with statements A within domain X
> and are valid within domain Z, but statements A are not valid within domain Z.
> 
> We also need to address the possibility that future knowledge may invalidate
> what we hold to be true. We do not need to go into engineering to illustrate
> this, instead consider ownership under English law (probably the law in most
> countries is similar). You may make a statement "I own X"
> and you may sincerely believe it to be true because you paid Fred good money
> for X. But if Fred stole X, then you don't own it. Nor do you own it if Fred
> bought it from Dick who stole it.
> 
> So actually the statement "I own X" means something rather vague - I did not
> steal X, and nobody claiming to be the rightful owner has yet challenged my
> ownership. However in some domains, the statement may be taken as being
> more absolute than that.
> 
> Best regards,
> David
> 
> At 12:31 07/01/2013, Matthew West wrote:
> >Dear Fabian,
> >
> >Yes, we use the word "accuracy" differently. I am not married to the
> >term, but I believe the concept that I denote with the word "accuracy"
> >is important. Since I am not in the
> >word-smithing business, let's just use "accuracy_F" for my notion of
> >accuracy and "accuracy_M" for yours until somebody comes up with
> >prettier terms.
> >
> >Here are my arguments why accuracy_F is more widely applicable than
> >accuracy_M, and why I think accuracy_M, if defined by "closeness to truth",
> >has its flaws.
> >
> >(1) A test for accuracy_M requires an answer
> >to the following question: "Is the axiom X close enough to the truth
> >for the purpose to which it is being put?", where the purpose derives from
> >the requirements of the application that the ontology is part of. In
> >the absence of a given purpose it does not make sense to ask whether an
> >axiom is accurate_M.
> >MW: Well really it comes in two parts. Firstly you can say what the
> >accuracy is, say pi to 3SF, or to 5SF. That does not change. When it
> >comes to assessing the ontology for a particular purpose, you need to
> >look at your requirements and whether they are matched.
> >
> >Okay. So what is the accuracy of "The world is a sphere?"
> >
> >MW2: Well I would expect that to be expressed as the % difference in
> >volume of the sphere proposed as a model of the world, with a maximum
> >difference between the model and reality. Of course WGS84 is an oblate
> >spheroid that is used by the GPS system, and most maps and charts these
> >days, but it is still only an approximation.
> >Indeed, in the Oil Industry when I was last aware of what was
> >happening, different oblate spheroids were used for different parts of
> >the earth to give the best local approximation.
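> >
> >A minimal sketch of this kind of quantitative accuracy in Python (the WGS84
> >constants are the published ones; everything else is illustrative):
> >
> >import math
> >
> ># pi stated to 3 significant figures: accuracy as relative error
> >pi_3sf = 3.14
> >rel_err = abs(pi_3sf - math.pi) / math.pi
> >print(f"pi = 3.14 has relative error {rel_err:.2e}")  # ~5.1e-04
> >
> ># WGS84 oblate spheroid vs. a sphere of the same equatorial radius:
> ># the % difference in volume suggested above
> >a = 6378137.0              # WGS84 semi-major axis (m)
> >f = 1 / 298.257223563      # WGS84 flattening
> >b = a * (1 - f)            # semi-minor axis (m)
> >v_spheroid = 4 / 3 * math.pi * a * a * b
> >v_sphere = 4 / 3 * math.pi * a ** 3
> >print(f"{100 * (v_sphere - v_spheroid) / v_spheroid:.3f}% volume difference")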
> >
> >It seems that you are now contradicting yourself. Above you said that
> >accuracy_M does not change, and that only when it comes to assessing
> >the ontology for a particular purpose does one need to consider whether the
> >accuracy_M is sufficient for the requirements of that purpose. But now
> >you are saying that the accuracy_M of "The world is a sphere" cannot be
> >evaluated independently of a given location or purpose.
> >
> >MW3: I don’t see how you reach that conclusion from what I have said.
> >The evaluation of the accuracy of the world as a sphere will come up
> >with the same answer each time you evaluate it (using the same method).
> >What may change is whether the accuracy means the model is fit for
> >purpose. Probably good enough for movement of the planets, but not for
> >drilling for oil in this case.
> >
> >
> >What is the accuracy of "All birds can fly"?
> >
> >MW2: That is untrue. I don’t think there is a way to say “Most
> >birds can fly” in logic, which is a shame.
> >
> >Well, you cannot say it in OWL or classical first-order logic. There
> >are more logics than that. As I mentioned in my last email, fuzzy logic
> >is used to represent these kinds of ideas.
> >
> >However, if I understand you correctly, you claim that "pi = 3.14" and
> >"The world is a sphere" are not false (but somewhat true, although not
> >absolutely true) but "All birds can fly" is false. Where is the
> >difference?
> >
> >MW3: Well, I would only say “pi=3.14 to 3SF” is true; without the
> >statement of accuracy you might reasonably expect absolute accuracy,
> >and then it would be false.
> >
> >MW3: As for “All birds can fly”: I take words like “All” and
> >“None” to have absolute meaning, certainly as used in logic. So if
> >it is not all, it is inaccurate to say that it is.
> >
> >
> >As we have discussed before, there are
> >ontologies that are not developed with a
> >specific application in mind (e.g., Gene
> >Ontology, Foundational Model of Anatomy). I
> >would argue that in these cases the notion of
> >accuracy_M is not applicable. But even if you
> >think that there are some hidden requirements
> >based on implicit assumptions on the future use
> >of these ontologies, it would be very hard to
> >identify requirements for these ontologies that
> >would allow you to evaluate the accuracy_M of
> >these ontologies. So even if accuracy_M in these
> >cases is theoretically defined, there is no way to measure it.
> >
> >MW: I disagree rather strongly. I think there is
> >a clear purpose for these ontologies, which is
> >to be an integrating ontology over some scope,
> >which is actually surprisingly narrow. So for
> >example, you would not consider using them for
> >engineering applications, or for product sales.
> >
> >You are saying: "The purpose of the ontology X
> >is to be an integrating ontology"? Okay, let's
> >take a step back. The whole reason why we are
> >talking about the purpose of an ontology is
> >because the purpose is supposed to give us the
> >requirements that we can use to evaluate the
> >ontology. E.g., if somebody asks "What is the
> >purpose of building the new road between X and
> >Y?" a possible answer is "To reduce the traffic
> >congestion", and one can evaluate the design of
> >the road by studying traffic, use of existing
> >roads etc. But your answer is analogous to "The
> >purpose of this road is to be a highway."
> >
> >
> >That's not its purpose, that's what it is. For
> >example, the fact that the FMA is an
> >(integrating?) ontology of human anatomy does
> >not tell you anything about the relationships it
> >is supposed to include. It does not tell you
> >anything about the requirements for representing
> >developmental change in human anatomy etc.
> >
> >MW2: If an ontology is intended to be an
> >integrating ontology, it has some consequences
> >for how that ontology is developed, and has some
> >properties that you can determine to see if it does that successfully.
> >
> >Matthew, with all due respect, that is
> >rather vague. My argument was that accuracy_F is
> >more widely applicable than accuracy_M, because
> >measuring accuracy_F does not depend on
> >requirements, which are either absent or at least
> >hard to nail down for reference ontologies. And
> >unless you are able to get more specific than
> >"some consequences" and "some properties" I think you are proving my point.
> >
> >MW3: You continue to confuse requirements with
> >accuracy itself. The requirements just give rise
> >to the properties, such as accuracy, that are
> >going to be important to know the value of in
> >establishing whether an ontology is fit-for-purpose.
> >
> >Here is a concrete example: Assume I am
> >developing a reference ontology for the
> >canonical male adult human anatomy. The current
> >version of my ontology contains the following
> >axioms. The $64000 question is: Is it accurate?
> >- T1 instance_of thoracicVertebra
> >- T2 instance_of thoracicVertebra
> >- T1 connected_to T2
> >- thoracicVertebra subclass Vertebra
> >- If x instance_of Vertebra, then x part_of spine.
> >
> >Now, I can easily tell you whether this ontology
> >is accurate_F, and how to find out: all you need
> >is a textbook on human anatomy. Is it
> >accurate_M? And if so, how did you find out?
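> >
> >The axioms above are simple enough to run as a toy forward-chaining sketch
> >(a hypothetical Python encoding, not any particular ontology language),
> >which makes the accuracy_F test concrete: compare what is derived against
> >the textbook:
> >
> ># the five axioms, as plain facts and rules
> >instance_of = {("T1", "thoracicVertebra"), ("T2", "thoracicVertebra")}
> >connected_to = {("T1", "T2")}
> >subclass = {("thoracicVertebra", "Vertebra")}
> >
> ># rule: x instance_of C and C subclass D  =>  x instance_of D
> >for (x, c) in list(instance_of):
> >    for (c2, d) in subclass:
> >        if c == c2:
> >            instance_of.add((x, d))
> >
> ># rule: x instance_of Vertebra  =>  x part_of spine
> >part_of = {(x, "spine") for (x, c) in instance_of if c == "Vertebra"}
> >
> ># accuracy_F check: derived assertions vs. textbook anatomy
> >textbook = {("T1", "spine"), ("T2", "spine")}
> >print(part_of == textbook)  # True if the ontology matches the textbook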
> >
> >MW3: It seems to me that this is just a question
> >of whether this is an accurate representation of
> >some alleged knowledge. It does not tell you
> >whether that alleged knowledge is true or not.
> >
> >MW3: Is that the essence of accuracy_F? I can
> >quite see that that is a useful property, but
> >equally it is not everything that can be said about
> >accuracy. Accuracy_M is supposed to bridge from
> >the ontology to the reality that you wish were a
> >model (in the model-theory sense) of that ontology. I
> >can quite see that that could have elements that
> >were the accuracy of representation of a theory,
> >and the accuracy of the theory as a
> >representation of the world, and that would be a useful separation.
> >
> >
> >In contrast, at least in the cases of scientific
> >ontologies that cover established knowledge in a
> >domain it is pretty straightforward to test
> >for accuracy_F: just ask domain experts whether the axioms are true.
> >
> >MW: So is Newtonian Physics true?
> >
> >
> >
> >Newton thought that his results, e.g., the laws
> >of motion, are universal laws that describe the
> >behavior of all objects in the universe. In this
> >sense, Newtonian Physics is false. As a theory
> >about gravity, mass, weight of the slow, medium
> >sized objects people encounter in their daily lives, it is true.
> >
> >MW2: Now you see I would rather say that it is
> >accurate for engineering purposes provided you
> >do not travel at speeds greater than X.
> >
> >(Of course, domain experts might be wrong, so
> >this is not a fool-proof approach to measure
> >accuracy_F, but then again no measurement technique is without flaws).
> >
> >MW: Actually, this is a really bad argument.
> >Most scientists would agree that the current
> >scientific theories are just those that have not
> >yet been proven wrong, and particularly in the
> >field of physics there is a constant expectation
> >that the current set of theories may be
> >overturned by some insight (indeed there is good
> >evidence that they cannot be correct). Hence the
> >well-known saying “All models are wrong, but
> >some are useful”. That gets you back to
> >accuracy_M where you need to say “useful for what?”
> >
> >I don't know why you think that this is relevant
> >to what I wrote. But what most scientists would
> >agree to is that all scientific knowledge is
> >falsifiable. However, this does not mean that
> >the current scientific theories are "just those
> >that have not been proven wrong yet", let alone
> >that scientists assume that all theories are
> >wrong. There is a vast difference between falsifiable and false.
> >
> >Anyway, I was not talking about philosophy of
> >science, but just making a point that there is a
> >measurement technique for accuracy_F, namely
> >asking scientists whether the content of the ontology is true or false.
> >
> >MW2: I suggest you are unlikely to get either of
> >those as an answer. You’re much more likely to
> >get an answer like “it depends...”
> >
> >This is just not true. There is tons and tons of
> >knowledge, which is completely uncontroversial,
> >e.g. about the structure of proteins or the
> >physical properties of substances (collected in
> >huge reference databases of material science).
> >
> >Scientists obviously disagree on cutting edge
> >stuff and argue about that, but even to have
> >that argument they need to take a lot of things for granted.
> >
> >MW3: OK. I will accept that I am biased by my
> >own discipline, engineering. Perhaps in other
> >fields knowledge is well established and
> >unlikely to be challenged. However, engineering
> >has a significant element of empirical theories,
> >where it is not only well known that theories
> >have limited applicability, but people’s lives
> >can depend on the theories not being used
> >outside their range of applicability. So I
> >don’t think we will get a lot of credit when,
> >say, a plane crashes because the wrong theory was
> >used somewhere, and the engineers complain
> >“But we used the Ontology Summit evaluation of
> >the ontology and that said it was accurate.”
> >
> >
> >The challenge for you is to come up with a
> >measurement for accuracy_M. According to what
> >you wrote above, these are actually two
> >questions: How do you measure the accuracy? And
> >how do you measure whether the accuracy is
> >"close enough" to a given purpose of an ontology?
> >
> >MW2: I notice that I am talking about accuracy
> >in a quantitative sense, and you are talking
> >about it in a purely logical sense. For
> >quantitative accuracy, there is a state of
> >affairs that your model represents, and your
> >accuracy is rather simply the difference between
> >that state of affairs and your representation of it.
> >
> >MW2: I think logical accuracy is actually
> >harder. You can clearly say that logical
> >inconsistency means your axioms are inaccurate,
> >but what is your basis for saying that they are
> >accurate? What do you compare them to?
> >
> >Since I am not a domain expert, I ask somebody
> >who knows. The ontologies that I am talking
> >about often represent text-book
> >knowledge, and always established knowledge. Now
> >there is the possibility that later one finds
> >out that the scientific consensus at a time was
> >wrong; in that case the accuracy_F measure would
> >be faulty. However, that is very rarely the
> >case. The big scientific ontologies are pretty
> >stable, and the changes that are made are
> >usually because somebody made an error in
> >coding, not because the science was false at the time.
> >
> >MW3: Again, Accuracy_F seems to be about
> >accuracy of representation of a theory.
> >
> >If you have two logical theories for the same
> >thing, which are inconsistent with each other,
> >but both work (3D vs 4D would be an example) how
> >do you state the accuracy of these? If they are
> >both accurate do you accept both of them? How do
> >you account for their being inconsistent?
> >
> >
> >Rudolf Carnap published in his early years a
> >wonderful article called "Überwindung der
> >Metaphysik durch logische Analyse der Sprache"
> >(roughly: Overcoming metaphysics by logical
> >analysis of language), where he argues that all
> >of ontology consists just of pseudo-problems,
> >that ontological assertions are strictly
> >speaking meaningless, and are at best
> >expressions of an attitude towards life without
> >any scientific relevance. Obviously, he
> >overreached there a little bit, but I believe
> >that Carnap would have pointed at the 3D vs. 4D
> >debate as a picture-book example of what he was on about.
> >
> >In any case, nobody outside philosophy should
> >care about this debate. The only reason why it
> >got any traction outside philosophy is because
> >the limits of the expressivity of OWL make it
> >hard to represent change over time.
> >
> >But I don't want to dodge the underlying
> >question: How do we evaluate the accuracy of
> >top-level ontologies? Now, the answer requires some background:
> >
> >Ontologies are an inferentially dependent
> >network of axioms with an intended
> >interpretation (which is documented by comments,
> >natural language definitions, pictures etc.). They
> >represent a "Web of Belief", to use a phrase
> >from Quine. Strictly speaking, none of these
> >axioms can really be evaluated independently; the
> >semantics of each axiom depends on the rest
> >of the ontology. But some of the axioms are
> >closer to empirically verifiable observations
> >("Obama is a U.S. citizen") than others that are
> >more theoretical ("parthood is transitive").
> >Most of the scientific knowledge that we are
> >representing in ontologies is actually in the
> >middle between these two examples. If we measure
> >the accuracy_F of an ontology, then we compare
> >the observation-close axioms to available
> >empirical observations. Of course, we ontologists
> >usually don't do this directly. What we really
> >do is ask some expert, who has absorbed the
> >relevant empirical data, and who summarizes the
> >scientific consensus at the time for us. But
> >even if we ontologists were to go into labs
> >ourselves, we would not be able to "prove" the
> >accuracy of an axiom. However, if the axiom does
> >not clash with any observations, then that's
> >evidence that the ontology is accurate_F. Thus, the
> >theoretical parts of the ontology are not
> >validated directly but only indirectly by their
> >connections to the observation-close parts of the ontology.
> >
> >MW3: Now this seems to be different from what I
> >was understanding above. Here you are talking
> >about how well the ontology matches the real
> >world. Though I do not see why the transitivity
> >of parthood is any harder to test this way than whether Obama is a US
> >citizen.
> >
> >So, with that said, back to the question.
> >Top-level ontologies are obviously very theoretical.
> >
> >MW: I think I would say that they are very
> >general, rather than very theoretical.
> >
> >Thus, it is impossible to measure their accuracy_F directly.
> >
> >MW3: I disagree. See comment about transitivity
> >of parthood above. It strikes me this is rather easy to test empirically.
> >
> >The only way they can be measured is by the role
> >they play as part of larger ontologies. If a
> >top-level ontology is used successfully as part
> >of many scientific ontologies that are all
> >accurate_F, then the top-level ontology is
> >accurate_F. However, this does not exclude the
> >possibility that there is a rival top-level
> >ontology that is equally accurate_F.
> >
> >MW3: I would quite accept that there can be
> >multiple ontologies at any level that are equally accurate.
> >
> >
> >
> >(2) For ontology reuse accuracy_F is more
> >important than accuracy_M. Imagine person A
> >developed an ontology to a given set of
> >requirements R1 and determined by thorough
> >testing that the ontology is accurate_M with
> >respect to R1. Now person B considers reusing
> >the ontology within a different application with
> >a different set of requirements R2. For person B
> >it is completely irrelevant to know whether the
> >ontology is accurate_M with respect to R1. What
> >B would be interested in is whether the ontology
> >is accurate_M with respect to R2, but that information is not available.
> >
> >MW: That is just not true. Requirements R2 are
> >met if they are a subset of R1.
> >
> >Yes, in this specific case. But what about the
> >(much more likely) case that R2 is not a subset of R1?
> >
> >MW2: Then either all the requirements that were
> >met were not stated, or the requirements are not met.
> >
> >
> >You are ignoring my point. The point was that
> >accuracy_F is invariant to requirements. Accuracy_M is not.
> >
> >MW3: That is not true; you have just asserted it without evidence.
> >
> >Thus, in cases where somebody is interested in
> >reusing an ontology that was built with
> >requirements that are not a subset of the
> >original requirements, accuracy_F is useful, while accuracy_M is not.
> >
> >
> >
> >In contrast, since accuracy_F is invariant to
> >the requirements, the information that the
> >ontology has been tested successfully for
> >accuracy_F is valuable to person B. Granted, it
> >is not as good as finding out whether the
> >content of the ontology meets the requirements
> >of B, but it is at least something.
> >
> >MW: Let’s take another example. I have an
> >ontology that says that a thing can be a part of
> >itself. Is it true? The answer will depend on
> >whether you are using a classical mereology or
> >not. So the only answer you can give is “Yes or no”.
> >
> >This is just an ambiguous use of the word
> >"part". Axiomatic mereology was founded by
> >Leśniewski, who was mainly interested in
> >mereology as a substitute for set theory. Analogously
> >to subset and proper subset, he distinguished
> >between parthood and proper parthood. And this
> >has become the standard terminology for all
> >logicians and formal ontologists. This choice of
> >terminology is confusing, since the proper
> >parthood relationship in mereology is a better
> >match to the various parthood relationships that
> >we use in daily life. But if we resolve the
> >ambiguity, there is no problem. If by "part of"
> >you mean the relationship that people use in
> >English to describe the relationships between
> >the first half of a soccer game and the whole
> >game or the first two years of Mr. Obama's
> >presidency and the whole first term, then the
> >answer is: no, things cannot be part of themselves.
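> >
> >The subset analogy can be made literal; a minimal Python sketch (the sets
> >are made-up illustrations):
> >
> ># parthood ~ subset (reflexive); proper parthood ~ proper subset (irreflexive)
> >game = {"first_half", "second_half"}
> >first_half = {"first_half"}
> >
> >print(first_half <= game)  # True:  part_of, as subset
> >print(first_half < game)   # True:  proper_part_of, as proper subset
> >print(game <= game)        # True:  everything is a part of itself
> >print(game < game)         # False: nothing is a proper part of itself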
> >
> >
> >
> >
> >
> >(3) While the notions of "closer to the
> >truth" and "absolutely true" might seem to make
> >some intuitive sense in the context of
> >well-chosen examples, it is very hard to
> >generalize these ideas. I am not talking about
> >the lack of a formal theory; obviously, fuzzy
> >logic provides a theoretical framework for it.
> >However, I have yet to encounter any satisfying
> >explanation of what a truth-value of 0.35628 means.
> >And there is always the question of how one
> >determines the truth-values. Unless you have an
> >answer how to determine whether "The earth is a
> >sphere" is closer to the truth than "All birds
> >fly", I don't think we should rely on these ideas in ontology evaluation.
> >
> >MW: That is the wrong idea altogether. It is not
> >a matter of truth values, and it is fine to be
> >exactly true in Accuracy_M, but being close to
> >the truth is about distance from it, not the
> >probability of being absolutely true.
> >
> >Fuzzy logic has nothing to do with probability
> >(yes, I know wikipedia says otherwise, but that
> >is just wrong). It is a way to formalize the
> >intuition that you expressed: namely, that it is
> >not sufficient to talk about true and false, but
> >that we need to account for distance from the
> >truth. To put it in the terminology you used:
> >the "distance for the truth" is expressed by a
> >value in the interval from 0 to 1, where 0 is
> >"absolute true", 1 is "absolute false".
> ><http://plato.stanford.edu/entries/logic-
> fuzzy/>http://plato.stanford.edu/entries/logic-fuzzy/
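> >
> >For what it is worth, a minimal Python sketch of degrees of truth (the min
> >t-norm is standard; the bird values are made up):
> >
> >from functools import reduce
> >
> ># degrees of truth in [0, 1], with 1 = true; the "distance from the truth"
> ># in the sense above is then 1 - v
> >flies = {"sparrow": 1.0, "penguin": 0.0, "chicken": 0.3}
> >
> >def t_and(a, b):
> >    return min(a, b)  # Goedel t-norm: fuzzy conjunction
> >
> ># degree of truth of "all birds fly" over this tiny domain
> >all_fly = reduce(t_and, flies.values())
> >print(all_fly)        # 0.0: falsified outright by the penguin
> >print(1.0 - all_fly)  # its distance from the truth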
> >
> >
> >
> >
> >
> >
> >(4) I believe that the thing you are ultimately
> >interested in is whether the axioms enable the
> >ontology to meet its requirements as part of
> >a given application. In other words, the
> >important question is: does the ontology provide
> >the functions that it needs to provide to make
> >the whole system work? And this has nothing to
> >do with axioms being true or "close to true", as
> >the following thought experiment shows. Let's
> >assume that the role of an ontology in an
> >application is to determine whether there is a
> >train connection between two points. (Not the
> >route, just whether there is a connection or
> >not.) In reality, there is a train line from A
> >to B, from B to C, and from C to A, and no other
> >train line. However, the ontology O contains the following axioms:
> >(a) if there is a train line from x to y, x is connected to y.
> >(b) if x is connected to y, and there is a train
> >line from y to z, then x is connected to z.
> >(c) There is a train line from A to C, a train
> >line from C to B, and a train line from B to A.
> >All of the axioms in (c) are false. Not "close to
> >true", just plain false; thus these axioms are
> >not accurate_M. Nevertheless, the ontology will
> >perform its function in the application perfectly fine.
> >
> >MW: I don’t think I follow this. You seem to
> >be saying that there is a train line from A to
> >B, but not from B to A. Not quite sure how that makes sense.
> >
> >Yes, I assume for this example that train lines
> >are one-directional. If you think this is
> >unrealistic, just replace "train line" with
> >"one-way street" in the example. The point of
> >the example is that all axioms are false, but
> >that the axiom set will respond to all queries
> >about connectedness with true answers, and thus
> >provides the intended functionality to the
> >application. Hence truth (even closeness to
> >truth) of the axioms in the ontology is not
> >required to enable an application to work.
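> >
> >The thought experiment is easy to check mechanically; a small Python sketch
> >(directed edges, Warshall-style transitive closure):
> >
> >def closure(edges, nodes):
> >    # transitive closure of a directed edge set
> >    reach = set(edges)
> >    for k in nodes:
> >        for i in nodes:
> >            for j in nodes:
> >                if (i, k) in reach and (k, j) in reach:
> >                    reach.add((i, j))
> >    return reach
> >
> >nodes = ["A", "B", "C"]
> >reality = {("A", "B"), ("B", "C"), ("C", "A")}   # the actual train lines
> >ontology = {("A", "C"), ("C", "B"), ("B", "A")}  # axioms (c): all false
> >
> ># every axiom in (c) is false, yet the derived connectedness
> ># relation is exactly the real one
> >print(all(e not in reality for e in ontology))              # True
> >print(closure(ontology, nodes) == closure(reality, nodes))  # True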
> >
> >MW: I am reminded of the observation that “The
> >worst possible thing you can do is the right thing for the wrong
> >reason.”
> >
> >I don't argue that this is a situation that one
> >should strive for. This is a thought-experiment
> >that shows that the ability to function well
> >within the context of an application is
> >logically independent of the accurate
> >representation of reality. Which is why we
> >should keep both concepts apart, and not muddle
> >them in the definition of "accuracy".
> >
> >MW: And that is a distinction I make by keeping
> >accuracy (a property of an ontology) separate
> >from quality (fitness for some particular purpose).
> >
> >Regards
> >
> >Matthew West
> >Information  Junction
> >Tel: +44 1489 880185
> >Mobile: +44 750 3385279
> >Skype: dr.matthew.west
> >matthew.west@informationjunction.co.uk
> >http://www.informationjunction.co.uk/
> >http://www.matthew-west.org.uk/
> >
> >This email originates from Information Junction
> >Ltd. Registered in England and Wales No. 6632177.
> >Registered office: 2 Brookside, Meadow Way,
> >Letchworth Garden City, Hertfordshire, SG6 3JE.
> >
> 
> 
> ============================================================
> David Leal
> CAESAR Systems Limited
> registered office: 31 Shell Road, Lewisham, London SE13 7DF
> registered in England no. 2422371
> mob:            +44 (0)77 0702 6926
> landline:       +44 (0)20 8469 9206
> e-mail: david.leal@xxxxxxxxxxxxxxxxxxx
> web site:       http://www.caesarsystems.co.uk
> ============================================================
> 
> 


_________________________________________________________________
Msg Archives: http://ontolog.cim3.net/forum/ontology-summit/   
Subscribe/Config: http://ontolog.cim3.net/mailman/listinfo/ontology-summit/  
Unsubscribe: mailto:ontology-summit-leave@xxxxxxxxxxxxxxxx
Community Files: http://ontolog.cim3.net/file/work/OntologySummit2013/
Community Wiki: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013  
Community Portal: http://ontolog.cim3.net/wiki/