ontolog-forum

Re: [ontolog-forum] Ontology based conversational interfaces

To: "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Alex Shkotin <alex.shkotin@xxxxxxxxx>
Date: Mon, 6 Jul 2015 20:44:54 +0300
Message-id: <CAFxxROTydmDf9oE4W+G6pZ5P6REbHynhicNm-XGRTRzjdBhMhw@xxxxxxxxxxxxxx>
There is a good first page of results if you Google "ontology based speech recognition".
By the way, what about our leader, Watson@IBM: does it use speech recognition?



2015-07-06 20:33 GMT+03:00 Rich Cooper <metasemantics@xxxxxxxxxxxxxxxxxxxxxx>:

Here is another paper with a bit more depth:

 

http://pub.uni-bielefeld.de/luur/download?func=downloadFile&recordOId=2278529&fileOId=2674859

 

Here is a snippet on the method they use:

 

In Pythia, natural language expressions are parsed and interpreted with respect to a grammar which we assume to be composed of two parts: an ontology-specific part and an ontology-independent part. The ontology-specific part contains lexical entries that refer to individuals, concepts, and properties of the underlying ontology. It is generated automatically from an ontology-lexicon model, as will be described below. The ontology-independent part comprises functional expressions like auxiliary verbs, determiners, wh-words and so on. The overall picture can be sketched as follows
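
To make the two-part split concrete, here is a toy Python sketch of my own; the entry formats and the DBpedia-style names are purely illustrative, not Pythia's actual grammar formalism:

# Toy illustration (not Pythia's formalism): a grammar lexicon composed
# of an ontology-specific part generated from an ontology-lexicon model,
# plus a hand-written ontology-independent part.

# Hypothetical mini ontology-lexicon model: ontology terms mapped to the
# surface forms that can verbalize them (all names are made up).
ONTOLOGY_LEXICON = {
    "dbo:Film":       {"pos": "noun",       "forms": ["film", "movie"]},
    "dbo:director":   {"pos": "verb",       "forms": ["directed"]},
    "dbr:Tim_Burton": {"pos": "propernoun", "forms": ["Tim Burton"]},
}

def generate_ontology_specific_entries(lexicon):
    """Auto-generate lexical entries from the ontology-lexicon model."""
    entries = []
    for term, info in lexicon.items():
        for form in info["forms"]:
            entries.append({"surface": form, "pos": info["pos"],
                            "refers_to": term})
    return entries

# Hand-written functional entries: auxiliaries, determiners, wh-words.
ONTOLOGY_INDEPENDENT_ENTRIES = [
    {"surface": "which", "pos": "whword",     "refers_to": None},
    {"surface": "did",   "pos": "auxverb",    "refers_to": None},
    {"surface": "the",   "pos": "determiner", "refers_to": None},
]

# The full grammar lexicon is simply the union of the two parts.
GRAMMAR_LEXICON = (generate_ontology_specific_entries(ONTOLOGY_LEXICON)
                   + ONTOLOGY_INDEPENDENT_ENTRIES)

for entry in GRAMMAR_LEXICON:
    print(entry)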

 

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Monday, July 06, 2015 9:17 AM


To: '[ontolog-forum] '; 'Yuriy Milov'
Subject: Re: [ontolog-forum] Ontology based conversational interfaces

 

Dear Alex,

 

From the introduction, I got these snippets:

 

Conversational interfaces as defined by Kölzer [Kölzer 1999] let users state what they want in their own terms, just as they would do speaking to another person. One of the most difficult tasks in implementing a conversational interface is to interpret utterances and understand their meaning. To do that, we are using ontologies. The ontologies play a key role at semantic interpretation time, since the meaning of utterances can be inferred by looking for concepts and their attributes. The use of ontologies for representing domain knowledge and for supporting reasoning is becoming widespread. The ontologies, however, may also be used for facilitating the interaction between user and PA.

 

Later in the paper, they say:

 

we limited the space of dialogue utterances to directive speech act classes [Searle 1975]—inform, request, or answer—since such classes define the type of expected utterances in a master-slave relationship.

...

In the context of an open conversation, the problem of understanding is complex, demanding a well structured knowledge base. Domain knowledge is used here to further process the user's statements and for reasoning. To this effect, we are using a set of task and domain ontologies, separating domain and task models for reasoning.

...

The key components that make up an ontology are a vocabulary of basic terms and a precise specification of what those terms mean [Guarino 1998]. Ontologies play two main roles in our PA: a) they help interpreting the context of messages sent by others agents or by the user (utterances); and b) they keep a computational representation of knowledge useful at inference time. The ontologies may also facilitate the process of semantic interpretation, supplying the parser with linguistics elements, like noun synonyms, or hyponyms/hyperonyms.
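
That last role is easy to picture. Here is a minimal sketch (mine, not the paper's implementation) of an ontology supplying a parser with synonyms and hyponyms for a concept such as Project:

# A toy ontology handing the parser lexical expansions; the concept
# names and relations below are made up for illustration.
TOY_ONTOLOGY = {
    "Project": {
        "synonyms": ["undertaking", "venture"],
        "hyponyms": ["ResearchProject", "SoftwareProject"],
    },
    "Document": {
        "synonyms": ["file", "report"],
        "hyponyms": ["Memo", "Specification"],
    },
}

def lexical_expansions(word):
    """Return every surface term the parser should accept as naming the
    same (or a more specific) concept as `word`."""
    concept = TOY_ONTOLOGY.get(word.capitalize())
    if concept is None:
        return {word}
    return {word} | set(concept["synonyms"]) | set(concept["hyponyms"])

print(lexical_expansions("project"))
# e.g. {'project', 'undertaking', 'venture',
#       'ResearchProject', 'SoftwareProject'}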

 

Figure 1 in the paper models the concept Project, but it is shown in isolation, so how it fits into the overall picture goes undescribed.

 

So, overall, the paper is slight (four pages) and doesn't go into enough depth to really show how they do it.

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Alex Shkotin
Sent: Monday, July 06, 2015 1:51 AM
To: [ontolog-forum]; Yuriy Milov
Subject: Re: [ontolog-forum] Ontology based conversational interfaces

 

Rich, 

 

Is there an overview of the methods used for voice recognition? This service works really well today (for Russian too ;-) and could be used as input for the ontology-based part of a conversation.

Do they use an ontology (of any kind) on their side? I don't think so.

But we can at least get good-quality recognition from them.

A friend of mine is developing a read/write assistant using OWL 2 and a reasoner, and I am sure that adding a voice recognition and generation interface is not a problem nowadays.
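
For example, a few lines of glue are enough today. This sketch of mine assumes the third-party Python packages SpeechRecognition and pyttsx3 (with PyAudio for microphone access):

import speech_recognition as sr   # pip install SpeechRecognition
import pyttsx3                    # pip install pyttsx3

recognizer = sr.Recognizer()
with sr.Microphone() as source:   # microphone access needs PyAudio
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Google's free web recognizer; language="ru-RU" works for Russian.
    text = recognizer.recognize_google(audio, language="ru-RU")
except sr.UnknownValueError:      # speech was unintelligible
    text = ""
print("Heard:", text)

engine = pyttsx3.init()           # speech synthesis for the reply
engine.say("You said: " + text)
engine.runAndWait()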


Alex

 

 

2015-07-04 21:57 GMT+03:00 Rich Cooper <metasemantics@xxxxxxxxxxxxxxxxxxxxxx>:

Here is an abstract from a paper:

 

In this paper we present an ontology-based utterance interpretation in the context of intelligent assistance. Ontologies are used for syntactic and semantic interpretation and for task representation. This mechanism is embedded in a conversational interface applied to personal assistant agents.  The main goal of this approach is to offer a system capable of performing tasks through an intuitive interface, allowing experienced and less experienced users to interact with it in an easy and comfortable way. 

 

The paper's URL is:

http://www.nilc.icmc.usp.br/til/til2007_English/arq0185.pdf

 

And the title is:

"An Ontology-Based Utterance Interpretation in the Context of Intelligent Assistance "

 

The paper is not very deep, but it gives an overview of the authors' approach to conversational interfaces, so it's inspirational. 

 

Products like Dragon NaturallySpeaking (DNS) have shown that speech-to-text and text-to-speech are functional enough to treat as mostly reliable text I/O for a conversational interface.  Add a text-based assistant to DNS text I/O, and you get a hearing and speaking conversationalist.  The paper above is focused on the ontology of the agent as used to interpret the user's side of the conversation. 
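
The composition itself is only a few lines. A skeleton sketch in Python, where every name is a hypothetical stand-in rather than DNS's actual API:

# transcribe(), synthesize(), and TextAssistant are placeholders for
# DNS-like speech components and a text-only assistant.

def transcribe() -> str:
    """Stand-in for a speech-to-text engine returning one utterance."""
    return input("(spoken) > ")          # typed text as a stand-in

def synthesize(text: str) -> None:
    """Stand-in for a text-to-speech engine."""
    print("(speaks)", text)

class TextAssistant:
    """Stand-in for an ontology-based, text-only assistant."""
    def reply(self, utterance: str) -> str:
        return "You said: " + utterance

def conversation_loop() -> None:
    assistant = TextAssistant()
    while True:
        utterance = transcribe()
        if utterance.strip().lower() in {"quit", "goodbye"}:
            break
        synthesize(assistant.reply(utterance))

conversation_loop()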

 

Does anyone have any references on conversational interfaces they would like to share, or any comments on the subject?

 

Another issue is the impersonality of the agent - that's bad.  If you watched the movie "Her", you know the depth of conversational mutual understanding it demonstrated between the (supposedly inhuman) agent and the user. 

 

There are lots of ways that people respond to simple stimuli - ways that are used by salesmen to get your attention swung toward the product or service they sell.  They work a certain small fraction of the time, so with large volumes of conversation, they can be studied as case histories of conversational actions.  With a database of conversations to interpret, some knowledge can be gleaned. 

 

But the Hollywood-like addition of art, elegance, plot, interest, music, and video, among other attention-demanding tactics, gives publishers more ability to steer the conversation in ways that the user appreciates, and to avoid topics or facts that cause the user dissonance. 

 

Is anyone else on the list concerned with conversational interfaces and personal agents?  If so, please speak up and share references!

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Saturday, July 04, 2015 10:24 AM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Is Philosophy Useful in Software Engineering Ontologies?

 

Dear Bruce,

 

You wrote:

In USA politics, do the Republicans “sense the same world” as the Democrats? 

 

Many Republicans seem to value freedom and property rights very highly, and consider that the way for the poor to rise along with the rest of us is best expressed in the free market, which has been getting less free with every change of government.  And Republicans are well positioned to accept money from wealthy promoters of political causes.  Nearly all are wealthy people, with a few not so wealthy (yet). 

 

Many Democrats appear to see poor people through vivid memories of their own, such as Bernie Sanders' stories of growing up with inadequate resources.  In every case I am familiar with, the Dems don't give much of their own money; rather, they want to take money from other people and give that money to the poor.  That is why Dems work through government instead of private industry.  Surprisingly, the Dems get rich giving your money to poor people: Al Gore has billions, the Clintons are hundred-millionaires, ...

 

Other Democrats seem to invent various *ways* to give other people's money to the poor, and often the receiving poor seem to include the politicians themselves, who get a whole lot more of the money than the poor do. 

 

Does Supreme Court Justice Scalia see “the same world” as Justice Sotomayor? 

 

Clearly not, as per the last Supreme Court decision and Scalia's indignant statements about that decision. 

 

Is [it] that people do not “see the (entire) world” – but only selected parts of it? 

 

IMHO, we each see an amazingly tiny part of the world, and the part each of us sees is as unique as our memories. 

 

And those selected parts are of course different?  Is it values that cause them to see separate parts? 

 

Values, IMHO, result from our processing of those memories.  We can be taught some values, though we have to learn others experientially; but in the vast majority of cases, it seems to me that our values also differ, if only in small regions.  We can agree on "similar" experiences we share with each other.  However, those small regions of divergence still cause a whole lot of trouble. 

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Bruce Schuman
Sent: Saturday, July 04, 2015 9:19 AM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Is Philosophy Useful in Software Engineering Ontologies?

 

Rich wrote:

 

“What I do believe is that we sense different worlds because of our diversity of sensing and interpretation.  I can only interpret things that I have some past experience with.  And my past experience is very different from even my neighbor's experience, or your experience, or JFS's experience.  The world is so frigging big, and so frigging complex, that we will probably never focus so tightly each to see the others' sense of the world.

 

“That is, whether I sense the same world as you sense (I think I most probably do) doesn't really matter.  The WAYs in which we sense the world are not exact, not even approximately equivalent, so sameness matters less than my understanding your views and beliefs about the world, or your understanding mine, because we have so much trouble aligning along those axes.”

 

Yes.  And seen at the “macro-plane” – the big simple variables that actually impact our collective social lives (unlike, for example, quarks) – this view would seem obviously true.  In USA politics, do the Republicans “sense the same world” as the Democrats?  Does Supreme Court Justice Scalia see “the same world” as Justice Sotomayor?  Is it that people do not “see the (entire) world” – but only selected parts of it?  And those selected parts are of course different?  Is it values that cause them to see separate parts?  Does your choice of a television news channel or newspaper affect your perception?  Does a trained surgeon “see” something different in an X-ray than a layman?

 

Is a political issue (e.g. same-sex marriage) “part of the world” -- ?  Certainly, we cannot say that a political issue has no empirical reality.  An issue, too, is a kind of “thing” – albeit an abstraction or concept or shared factor in collective decision-making.

 

I am an avid psych lit reader, but not a psychologist.  From my readings, I think most of what we experience is a reactivation of our memories, comprising a jambalaya of objects that are in some way linked either to the present stimuli, or to other memories of other linked stimuli. 

 

I think of it as a DAG (directed acyclic graph) of AND and OR nodes, with a "~" prefix to mark the complementary NOT.  Altogether, it is an AND/OR graph, with symbols and functions with parameter lists, all represented in the DAG. 

 

And something like this structure, organized in an individual human mind, creates a “world view” – a kind of interpretive lens through which we view the world.

 

The linkage, according to Chomsky, is a stored pattern with empty slots, or variables, that we fill in with bits and pieces of the current situation.  We see this newly filled-in pattern, in many ways like the matching pattern along with links, within links, ..

 

Yes – and the choice of that structure – what “pattern” it is – is highly free-form and adaptive.  Not only “which bits and pieces” are selected to fit into it, but how they are organized – and how they come together to form a “world view” or interpretive lens.

 

So do we inhabit a commonly shared world? 

 

We can never know that.  We can share our knowledge and observations with other agreeable agents, and they with us, and we can even run confirmatory experiments to confirm or deny our own view of a theory, theirs or ours.  But we can't really know if it is the SAME experience we have, or an experience of the SAME situation, because we are different observers, each with our own vast library of biases. 

 

Is this a problem that evolution must inevitably confront?  I’m involved with many deeply holistic conversations around the world, and there seems to be a common movement arising in different ways in many places towards an improved sense of community – a sense that we are all in this together, that this issue of interpretive fragmentation (and the inevitable confrontation) must be overcome, and that forces of evolutionary cultural psychology are pushing in this direction, generally under the influence of globalization.  In some sense, perhaps naively utopian, this perspective supposes we must all somehow become “agreeable agents”.

 

Mystical and religious approaches often underlie this sense of broad inclusion in the context of diversity.  But these approaches are highly holistic and perhaps somewhat “wordless”.  What about very concrete specific differences and collaboration/trust/cooperation around specific concerns – or political issues?

 

“we have so much trouble aligning along those axes.”

 

And we have no shared or consensual model of those axes.  My instinct is – the deep holism of religion and mystical spirituality DOES begin to offer intuitive guidance on this possible shared common structure or alignment.  Many “mystical symbols” point in this direction.  If we wish, we can see the Christian Cross in terms of X and Y axes – and in my world (check out “centering prayer”), I often hear talk about the vertical and horizontal axes of spiritual alignment – and how human beings can align shared understanding through some emerging intuition that seems to be common to many or all traditions.  One term to explore is “Axis Mundi” – the “axis of the world”.

 

IS there such a thing, in some empirical sense – or is this supposed “axis” a synthetic human construct, an artifact of belief, an intentional stipulation?  Are the “tree” and “circle” and “mandala” and “hierarchy” images commonly encountered in mystical spirituality a kind of “pre-mathematical holistic intuition” – an intuitive conceptual stab at a primal ontological mathematics that can help authentically guide or interconnect human beings?  If we believe in an innate wholeness of human thought, perhaps part of the broader task of semantic ontology involves keeping the door open to holistic symbolism.

 

Approached in these broad terms, what is the intuitive meaning of “directed” in these attached images of DAG graphs?  Is there any simple general mapping from any DAG to a one-dimensional interpretation (i.e., can every element of the graph be organized in one linear order, “from” one point along a single dimension “to” another point along that dimension)?  If so, could that “axis” be in some sense a common center or coordinate origin – despite the high variance in the DAG patterns?

 

The definition of “reachability” in the Wikipedia article seems to suggest the answer is yes.
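
For instance, a topological sort produces exactly such a linear order for any DAG. A minimal Python sketch (the toy graph and vertex names are my own):

from collections import deque

EDGES = {            # adjacency list of a toy DAG
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

def topological_order(edges):
    """Return one linear order of the vertices in which every directed
    edge points 'forward' along the line (Kahn's algorithm)."""
    indegree = {v: 0 for v in edges}
    for targets in edges.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for t in edges[v]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(edges):
        raise ValueError("graph has a cycle, so it is not a DAG")
    return order

print(topological_order(EDGES))   # e.g. ['a', 'b', 'c', 'd']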

 

 

Bruce Schuman, Santa Barbara CA USA

http://networknation.net/vision.cfm

 

https://en.wikipedia.org/wiki/Directed_acyclic_graph

 

“In mathematics and computer science, a directed acyclic graph is a directed graph with no directed cycles. That is, it is formed by a collection of vertices and directed edges, each edge connecting one vertex to another, such that there is no way to start at some vertex v and follow a sequence of edges that eventually loops back to v again.

 

DAGs may be used to model many different kinds of information. The reachability relation in a DAG forms a partial order, and any finite partial order may be represented by a DAG using reachability. A collection of tasks that must be ordered into a sequence, subject to constraints that certain tasks must be performed earlier than others, may be represented as a DAG with a vertex for each task and an edge for each constraint; algorithms for topological ordering may be used to generate a valid sequence. Additionally, DAGs may be used as a space-efficient representation of a collection of sequences with overlapping subsequences. DAGs are also used to represent systems of events or potential events and the causal relationships between them. DAGs may also be used to model processes in which data flows in a consistent direction through a network of processors, or states of a repository in a version-control system.”

 

 

[Attached image: example directed acyclic graphs]

 

 

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: Saturday, July 04, 2015 6:47 AM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Is Philosophy Useful in Software Engineering Ontologies?

 

Dear Matthew,

 

You wrote: Dear Rich,

So to summarise, you have no proof that we inhabit different worlds.

 

Yes, I have no proof we inhabit different worlds, and I don't necessarily believe we do.  But I also have no evidence that we inhabit the same world.

 

What I do believe is that we sense different worlds because of our diversity of sensing and interpretation.  I can only interpret things that I have some past experience with.  And my past experience is very different from even my neighbor's experience, or your experience, or JFS's experience.  The world is so frigging big, and so frigging complex, that we will probably never focus so tightly each to see the others' sense of the world. 

 

That is, whether I sense the same world as you sense (I think I most probably do) doesn't really matter.  The WAYs in which we sense the world are not exact, not even approximately equivalent, so sameness matters less than my understanding your views and beliefs about the world, or your understanding mine, because we have so much trouble aligning along those axes. 

 

Brian Greene has a very thought-provoking video on the 11 dimensions he believes comprise the universe.  Here is his video:

 

https://www.youtube.com/watch?v=YtdE662eY_M

 

Do you think we sense quarks?  I don't.  Our ability to interact with the universe is so extremely limited, and the universe is so vast, that we will likely never be looking at the same part of it.

 

So why assume we do see the same world?  That assumption seems suspect to me. 

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Matthew West
Sent: Saturday, July 04, 2015 6:09 AM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Is Philosophy Useful in Software Engineering Ontologies?

 

Dear Rich,

So to summarise, you have no proof that we inhabit different worlds.

 

Regards

 

Matthew West

http://www.matthew-west.org.uk

+44 750 338 5279

 

 

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Rich Cooper
Sent: 03 July 2015 22:50
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Is Philosophy Useful in Software Engineering Ontologies?

 

Dear Matthew,

 

You wrote:

In my view it is a really big thing to say that we do not together inhabit some common world. We might experience it in different ways, but to say that what we experience is different is quite another thing.

 

Regards

 

Matthew West                           

Information Junction

 

I am an avid psych lit reader, but not a psychologist.  From my readings, I think most of what we experience is a reactivation of our memories, comprising a jambalaya of objects that are in some way linked either to the present stimuli, or to other memories of other linked stimuli. 

 

I think of it as a DAG (directed acyclic graph) of AND and OR nodes, with a "~" prefix to mark the complementary NOT.  Altogether, it is an AND/OR graph, with symbols and functions with parameter lists, all represented in the DAG. 
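
To make that concrete, here is a minimal Python sketch of such an AND/OR DAG with "~"-complemented children; the node names and evaluation scheme are just my illustration:

GRAPH = {
    # node: (operator, [children]); a "~" prefix on a child negates it.
    "recognize_dog": ("AND", ["four_legs", "barks"]),
    "barks":         ("OR",  ["heard_bark", "~silent"]),
}

def evaluate(node, facts):
    """Evaluate a node of the AND/OR DAG against boolean leaf facts."""
    negate = node.startswith("~")
    name = node.lstrip("~")
    if name in facts:                      # leaf symbol
        value = facts[name]
    else:
        op, children = GRAPH[name]
        results = [evaluate(c, facts) for c in children]
        value = all(results) if op == "AND" else any(results)
    return (not value) if negate else value

print(evaluate("recognize_dog",
               {"four_legs": True, "heard_bark": False,
                "silent": False}))         # True, via ~silent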

 

The linkage, according to Chomsky, is a stored pattern with empty slots, or variables, that we fill in with bits and pieces of the current situation.  We see this newly filled-in pattern, in many ways like the matching pattern along with links, within links, ..
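
A toy way to picture that pattern-with-slots idea in Python (the slot names are arbitrary, my illustration only):

# A stored pattern is a template whose empty slots (None) get bound to
# bits and pieces of the current situation.
PATTERN = {"action": "give", "giver": None, "gift": None, "receiver": None}

def fill(pattern, **bindings):
    """Return a copy of the pattern with its empty slots filled in."""
    filled = dict(pattern)
    for slot, value in bindings.items():
        if filled.get(slot) is None:
            filled[slot] = value
    return filled

print(fill(PATTERN, giver="Alice", gift="book", receiver="Bob"))
# {'action': 'give', 'giver': 'Alice', 'gift': 'book', 'receiver': 'Bob'}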

 

So do we inhabit a commonly shared world? 

 

We can never know that.  We can share our knowledge and observations with other agreeable agents, and they with us, and we can even run confirmatory experiments to confirm or deny our own view of a theory, theirs or ours.  But we can't really know if it is the SAME experience we have, or an experience of the SAME situation, because we are different observers, each with our own vast library of biases. 

 

Sincerely,

Rich Cooper,

 

Chief Technology Officer,

MetaSemantics Corporation

MetaSemantics AT EnglishLogicKernel DOT com

( 9 4 9 ) 5 2 5-5 7 1 2

http://www.EnglishLogicKernel.com

 

From: ontolog-forum-bounces@xxxxxxxxxxxxxxxx [mailto:ontolog-forum-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Matthew West
Sent: Thursday, July 02, 2015 3:09 AM
To: '[ontolog-forum] '
Subject: Re: [ontolog-forum] Is Philosophy Useful in Software Engineering Ontologies?

 

Dear Kingsley,

 

 

On 6/30/15 9:21 PM, Chris Partridge wrote:

Not sure this is going to get us far, but I still cannot make much sense of "But the point is that none of it is about objective reality or objective truth.  It is about the world as seen by the people and software that have to communicate." Don't we see/sense the same world?

No we don't.

[MW>] That’s a big statement. Would you care to back it up with some evidence, rather than just assume it is a self-evident truth?

That's Ed's fundamental point. The very same point made by John Sowa, Patrick Hayes and others --  in a variety of posts over the years.

[MW>] I’m not sure I’ve heard them say that either. Care to give specific quotes?

 

In my view it is a really big thing to say that we do not together inhabit some common world. We might experience it in different ways, but to say that what we experience is different is quite another thing.

 

Regards

 

Matthew West                           

Information Junction

Mobile: +44 750 3385279

Skype: dr.matthew.west

matthew.west@xxxxxxxxxxxxxxxxxxxxxxxxx

http://www.informationjunction.co.uk/

https://www.matthew-west.org.uk/

This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.

Registered office: 8 Ennismore Close, Letchworth Garden City, Hertfordshire, SG6 2SU.

 

 



We are individuals for a reason :)

Think of this as the cognition paradox.

-- 
Regards,
 
Kingsley Idehen       
Founder & CEO 
OpenLink Software     
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this



 

 



 


_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
