By the way, Fabian Neuhaus, Oliver Kutz, and Till Mossakowski are very well known folks here at Ontolog.
Thanks,
Leo
From: Obrst, Leo J.
Sent: Sunday, October 11, 2015 8:32 PM
To: 'Thomas Johnston' <tmj44p@xxxxxxx>; [ontolog-forum] <ontolog-forum@xxxxxxxxxxxxxxxx>
Subject: RE: [ontolog-forum] FW: FW: CfP 11/16/2015: Knowledge-Based AI Track at 2016 FLAIRS
Tom,
I think that the quests for lexical and semantic fields are reasonable, but I also think that the quest for sub-word components (sometimes called componential analysis, or identifying sub-word features that are not phonological or morphological in nature; I personally think these are ontological notions) is largely a will-o'-the-wisp, though it does “blend” into those field notions.
Word sense issues, notorious for their (linguistic and other) theoretical intractability, to me demonstrate only the fluid (contextual, functional) nature
of words and their “types”. My personal guess is that words accommodate to the semantic fields of their contexts (possibly even construed as ontologies). A relatively distinct (to this point) notion is that of “blending”, sometimes called “conceptual blending”,
going back recently to Goguen et al’s works [1, 2], Fauconnier [3], etc., and recent ontological construals such as Neuhaus et al [4]. I personally favor the formal approaches here, i.e., [1, 2, 4], but stuff still needs to be worked out. So it’s fertile
ground. The blending notion does get into stuff you are interested in, I think.
Thanks,
Leo
[1] Goguen, J. Conceptual Blending.
https://cseweb.ucsd.edu/~goguen/papers/blend.html.
[2] Goguen, J. and Harrell, D. F. 2010. Style: A Computational and Conceptual Blending-Based Approach. In Argamon, S. and Dubnov, S., editors, The Structure of
Style: Algorithmic Approaches to Understanding Manner and Meaning, pages 147–170. Springer, Berlin.
http://www.springer.com/us/book/9783642123368. [I reviewed the Goguen & Harrell article: I can provide this to you].
[3] Fauconnier, G. and Turner, M. 2003. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. Basic Books.
[4] Neuhaus, F., O. Kutz, M. Codescu, T. Mossakowski. 2014. Fabricating Monsters is Hard - Towards the Automation of Conceptual Blending. Proc. of Computational Creativity, Concept Invention, and General Intelligence (C3GI at ECAI-14), Prague.
http://www.inf.unibz.it/~okutz/resources/Monsters-c3gi-14.pdf.
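P.S. For a concrete, if crude, feel for what a blend does, here is a toy sketch in Python. It is emphatically not the colimit construction of [1, 2, 4]: the feature dictionaries, the conflict rule, and the houseboat example are all invented for illustration only.

def generic_space(a, b):
    """Features shared, with equal values, by both input concepts."""
    return {k: v for k, v in a.items() if b.get(k) == v}

def blend(a, b):
    """Keep all non-conflicting features; on conflict, let `a` win.
    A crude stand-in for the choices a real blend must make."""
    merged = dict(b)
    merged.update(a)  # a's features override b's on conflict
    return merged

# Invented toy inputs for the classic "houseboat" blend:
house = {"function": "dwelling", "walls": True, "mobile": False}
boat = {"medium": "water", "hull": True, "mobile": True}

print(generic_space(house, boat))  # {}: nothing shared in this toy encoding
print(blend(house, boat))   # houseboat: a dwelling on water, but mobile=False?
print(blend(boat, house))   # reversed projection: order matters

That the naive merge yields a houseboat that cannot move is precisely the kind of wrong monster that makes automating blending hard, per [4].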
This thread started with a post from Leo Obrst, on the 15th of last month, announcing the FLAIRS Knowledge-Based AI conference. In my initial response,
a couple of days later, I expressed my interest in lexical and diachronic semantics.
A high-level distinction in semantic theory is that between cognitive semantics and truth-functional semantics. Cognitive semantics is mainly about what
words mean, whereas truth-functional semantics is mainly about what sentences mean. (Lots of caveats and details, e.g. by "words" I mean sub-sentential lexical units, but caveats are only distractions at this point.)
But there is an equally important way to draw the distinction. Cognitive semantics is about meaning as something that is "in the head"; truth-functional
semantics is about meaning as a relationship between language and the world. This is often referred to as the internalism/externalism debate.
A third distinction between the two camps. Following Montague, Barbara Partee and others, truth-functional semantics is deeply involved in developing
logical formalisms beyond the standard propositional and predicate logics, and trying to map more and more of ordinary language into those formalisms. Cognitive semanticists, as my early reading in the field and my recent catch-up reading lead me to believe,
are not very interested in translations into formal languages.
A fourth distinction. Truth-functional semanticists are more inclined to work within some variant of a "language of thought" paradigm, developed by Jerry
Fodor. This is a representationalist approach to semantics, one in which thought is something like doing logical inferences, somehow, in the brain, on concepts which are expressed as predicates and referents, somehow, in the brain. Representationalists are
balanced by another camp, the connectionists, who develop artificial neural networks (ANNs), and whose big names include McClelland and the Churchlands.
Representationalism/connectionism is a different debate than truth-functional/cognitive, but I think they might line up like this. Representationalists
and truth-functional semanticists like logical formalisms. Connectionists and cognitive semanticists both believe that meaning (in opposition to a famous statement of Putnam) is "in the head" and, being there, is a matter of holistic and distributed patterns
which are not easily translated into logical formalisms.
Jerry Fodor thinks that if ANNs get good enough to do deductive reasoning, we will understand that success as their finally being able to physically realize
Jerry's LOT (language of thought). So kudos to Jerry and formal linguistics in general. The Churchlands, on the other hand, doubt that studies of formal languages will help much in the development of increasingly sophisticated cognitive capabilities being
expressed in ANNs. So when ANNs do become expert at deductive reasoning, it won't be thanks to any guidance from Jerry and his friends. So kudos to McClelland and the rest of the San Diego connectionists.
And now for a word on diachronic semantics. Truth-functional semanticists have no interest in how words (such as the predicates in their formalisms) change
meaning over time. More generally, de Saussure sounded the death-knell for 19th century diachronic semantics just as firmly as Chomsky sounded the death-knell for Bloomfieldian empirical semantics, as well as for the sadly primitive attempts at a behavioristic
semantics by Skinner.
De Saussure said that the important thing to study was the meaning of words as they are right now, at a given point in time. This is synchronic as opposed to diachronic semantics. And his position seems intuitively right, for how could we study how meaning changes until we understand what meaning is? I don't have an equally plausible way to frame a countervailing intuition, but I do think that de Saussure was
wrong in this. I think that neither diachronic nor synchronic should be studied without reference to the other, that the best approach is a back-and-forth one.
And so I come to the concept of the "drift" of "semantic fields" which Leo Obrst introduced me to. Skipping caveats, and tolerating the circularity, semantic
fields are groups of words closely related in meaning, and semantic drift is the change in meaning over time of these semantic fields.
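To give "drift" at least one operational reading, here is a minimal sketch in Python. The approach (compare a word's collocational neighborhood across two periods) is my own gloss, not an established method I am citing, and the miniature corpora are invented:

from collections import Counter

def neighbors(word, sentences):
    """Counts of words co-occurring with `word` in the same sentence."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def drift(word, corpus_then, corpus_now, k=5):
    """Jaccard distance between the word's top-k neighbor sets."""
    then = {w for w, _ in neighbors(word, corpus_then).most_common(k)}
    now = {w for w, _ in neighbors(word, corpus_now).most_common(k)}
    return 1 - len(then & now) / len(then | now) if (then or now) else 0.0

# Invented miniature corpora for two periods:
corpus_1900 = ["the gay party was merry and bright",
               "a gay and lively tune"]
corpus_2000 = ["gay rights are civil rights",
               "the gay community organized a parade"]
print(drift("gay", corpus_1900, corpus_2000))  # near 1.0: heavy drift

A shrinking neighborhood overlap is of course only a crude proxy; it says nothing yet about the semantic forces driving the change.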
I believe that semantic fields exist in the brain. Jerry Fodor might not object to that, but I also believe that they exist as distributed patterns in
the brain (e.g. synchronized firings of a distributed set of neurons, at given frequencies). Jerry wouldn't like that. He thinks we will find discrete localized representations of predicates in the brain.
In my unpublished work from 40-50 years ago, I tried to develop a theory of the meaning of lexical units, and of how they change meaning over time. I
wrote a brief statement of that theory -- brief even for a high-level summary (!) -- in my postings in this thread on the 16th and the 19th of last month.
As I resume work on that theory, I would like to post additional "brief statements" which discuss some of the many current issues in semantics, in relation
to that theory, issues such as semantic holism, prototype vs. classical theories of meaning, externalism vs. internalism, the encyclopedia/dictionary relationship, word-pair patterns in corpus linguistics, evolutionary pressures on the development of language,
the development of language in children, and so on.
My hope is to get some helpful responses from those, such as Leo, with backgrounds in linguistics (and/or philosophy and/or ontology specifically). For
those who do not have strong backgrounds in these fields of study, but who might find some of these discussions interesting, my recommendation is to read a little before taking part. The IEP (Internet Encyclopedia of Philosophy) provides short overviews of
some of these topics; and as I've said before, the SEP (Stanford Encyclopedia of Philosophy) provides (usually) excellent more in-depth treatments.
On Tuesday, September 22, 2015 4:53 PM, "Obrst, Leo J." <lobrst@xxxxxxxxx> wrote:
A short answer to your last question:
But right now, there is one thing I would like your view on. Does statistics-based distributional semantics treat all collocation patterns as equal (ruling out, of course, the patterns of young children, many second-language users (certainly not Joseph Conrad though!), and the generally only quasi-literate)? That is, does it lack my notion of collocation patterns lying along a synthetic-to-analytic range of dispositions to accept or reject collocation patterns?
is no. I find much of distributional semantics more of an engineering approach so far, lacking theory. Like you, I don't think collocation patterns, degrees of substitutability, etc., à la Firth's “a word is characterized by the company it keeps”, i.e., common contexts of word tokens, are even close to the whole story. Degrees of expertise in the use of language, as you suggest, vary wildly.
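One concrete note, though: standard distributional models do weight collocations unequally, but on statistical association rather than on your synthetic/analytic axis. A common such weighting is pointwise mutual information (PMI); a toy sketch with all counts invented:

import math
from collections import Counter

# Invented toy counts: pair co-occurrences within some window, plus unigrams.
pair_counts = Counter({("strong", "tea"): 40, ("powerful", "tea"): 2,
                       ("strong", "argument"): 25, ("powerful", "argument"): 30})
word_counts = Counter({"strong": 500, "powerful": 400, "tea": 100, "argument": 200})
total_pairs = sum(pair_counts.values())
total_words = sum(word_counts.values())

def pmi(w1, w2):
    """Pointwise mutual information: log2( p(w1,w2) / (p(w1)*p(w2)) )."""
    p_joint = pair_counts[(w1, w2)] / total_pairs
    return math.log2(p_joint / ((word_counts[w1] / total_words) *
                                (word_counts[w2] / total_words)))

# "strong tea" far outscores "powerful tea": collocations are weighted
# unequally, but by statistical association, not by analyticity.
for pair in pair_counts:
    print(pair, round(pmi(*pair), 2))

So the non-uniformity is there, but it tracks frequency surprise, not analyticity; your distinction would be something genuinely additional.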
I'm coming across the same leads, right now from (i) Geeraerts "The Theoretical and Descriptive Development of Lexical Semantics", published in Behrens
& Zaefferer, The Lexicon in Focus, and (ii) Paradis, "Lexical Semantics", in The Encyclopedia of Applied Linguistics (2012).
I'm conversant with the SEP article "Concepts" (as well as other SEP articles in the same "semantic field" (8>) such as "Categories", "Theories of Meaning",
"Word Meaning" and several dozen more). So I don't want you to think I'm a complete novice in semantics. I'm pretty solid in philosophical discussions of semantics, but out of touch with recent linguistics work, especially the distributional semantics you
have provided helpful references for. That's where current work in linguistics seems to overlap the most with my own resuscitated/ongoing work in semantics, and where I need the most help.
A specific linking point between the statistics approach you have brought to my attention, and my own work, is the importance of the joint attribution
(or assent to, if queried) of pairs of lexical items on a specific occasion. If the statistics approach counts all such joint attributions equally, then I have something to add, for I do not think that all cases of joint attribution are semantically equal.
As I wrote on the 17th, "the semantic forces which account for the statistical patterns discussed in your references are still what interest me the most."
Some joint attributions (or willingness to assent, if queried) correspond to empirical generalizations, synthetic a posteriori statements. Others correspond to accepted semantic rules, analytic a priori statements. The more one is willing to say, of a joint
attribution pair "Of course. There just aren't any exceptions to that.", the more analytic their joint attribution is. Correlatively, if the joint attribution of one of that pair together with the negation of the other, is met with the response "No, that's
not true. In fact, it couldn't be true", that is further evidence for the analytic status of the joint attribution.
To take a worn-out example, "Mike's a bachelor, but he's married" will almost universally be met with the response or disposition to respond "That can't
be so. All bachelors are unmarried. That's just part of what being a bachelor is." Or, as we linguists/philosophers might also put it, that's just part of what "bachelor" means.
So collocation patterns alone don't get to the semantics. Each instance of collocation of a lexical pair falls on a continuum from "That appears to
be so" to "That must be so", in which the "must" means that any negation of the co-allocation is a semantic mistake, a mistake in understanding what the two lexical items mean (albeit a mistake which the folk will often say is a mistake in understanding what
the referents of the lexical items are).
The continuum, of course, reflects dispositions which correspond to neural states of collections of neurons. What else could be the cause of those dispositions?
What lexicographers do is guess at the current group-aggregate dispositional state of these collocation patterns, and enshrine as definiens and definiendum those pairs where the statistically aggregated dispositional patterns cluster at the "That must
be so" end of the spectrum.
I don't expect that you will comment on each of my messages in this thread (or each of my private messages, if we go private on this). In so kindly offering
to help me, you certainly weren't signing up for anything that time-consuming and out of the academic mainstream.
But right now, there is one thing I would like your view on. Does statistics-based distributional semantics treat all collocation patterns as equal (ruling out, of course, the patterns of young children, many second-language users (certainly not Joseph Conrad though!), and the generally only quasi-literate)? That is, does it lack my notion of collocation patterns lying along a synthetic-to-analytic range of dispositions to accept or reject collocation patterns?
If it does lack this notion, then I think I have something worth continuing to work on. If not, what's happened is just that I'm out of touch. And possibly,
you will tell me that it's not exactly one nor the other.
On Tuesday, September 22, 2015 10:03 AM, "Obrst, Leo J." <lobrst@xxxxxxxxx> wrote:
The notion of semantic fields goes back to earlier generations of linguists. John Lyons [1] devotes a chapter (ch. 8) in the first volume
to that subject, providing a history that primarily points to Trier’s work in the 1930s, but which built on others.
Often there is some confusion between “semantic fields” and “lexical fields”, with the former being primarily conceptual (dealing with
meaning) and the latter lexical (as one might think, dealing with form).
Until recently linguistic semantics primarily focused on “externalist” issues, as you call them, i.e., word, sense, and reference. And
most realist ontologists are externalist. But with the rise of corpus linguistics and statistical models of language, distributional semantics has emerged. In addition, of course, for some time, there has been “cognitive” linguistics (Lakoff, Fauconnier, and
many more, up to and beyond Gardenfors) and “cognitive” semantics. These latter trends are mostly “internalist”, hence often more concerned with lexical semantics and cognitive “concepts”. In the literature I often find cognitivists kind of reinventing
the notions and theories of semantics primarily derived from Frege, Russell, and the formal strain, leading today to that built on model-theory: see, e.g., [2, 3], mentioned previously in this forum.
[1] Lyons, John. 1979. Semantics 1 & 2. Cambridge: Cambridge University Press.
[2] Margolis, Eric; Stephen Laurence. 1999. Concepts: Core Readings. Cambridge, MA; London, England: The MIT Press.
To begin with, what do I understand a semantic field to be? I need to clarify this because the expression "semantic field" is new to me, and so my understanding
of it will not be the understanding that the current linguistics community has (or, at least, not yet).
I'll begin with the Wikipedia reference Leo provided. (After that, I want to move on to Jackendoff.)
That reference says: "In linguistics, a semantic field is a set of words grouped semantically (that is, by meaning), referring to a specific subject."
This is ok to get started with, but it's a pretty loose definition, and for two reasons. First, it uses "meaning" (in "grouped ... by meaning") in an expression intended to help define an expression which contains "semantic" (namely, "semantic field"). Second, "specific subject" is vague. For any collection of words, semantically-related or not, we can always find a "specific subject" for them by "ontological ascent". For any noun phrase, isn't "thing" (or, perhaps, "stuff") a "specific subject" (or, at least, a shared subject)?
I'd like to come up with a definition of "semantic field" that would be clear enough to bear significant theoretical weight. My current stumbling point is that I can't find a way to draw the boundaries of semantic fields. That would require a criterion distinguishing (i) a given set of semantically-related lexical items ("words" for short) constituting a semantic field from (ii) other words related to those words, but not part of that semantic field. In other words, the definition I want would answer the question "Why are some semantically-related words 'inside' a given semantic field, whereas others, related to them, are not?"
This, perhaps, is what "specific subject" is supposed to do, to make this distinction. Perhaps the idea is that, for a selected group of semantically-related
words, there is a lowest-level (most specific) ontological category that they can all be predicated of. Those words, plus that category, are a semantic field relative to that category, and a subset of the (unspecified) maximal collection of words also predicable
of that category.
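If "specific subject" is read that way, the test becomes computable once a taxonomy is fixed. A minimal sketch, with an invented toy taxonomy (any resemblance to a serious ontology is accidental):

# Toy taxonomy (invented): child -> parent.
PARENT = {
    "poodle": "dog", "beagle": "dog", "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "sparrow": "bird", "bird": "animal",
    "animal": "thing",
}

def ancestors(word):
    """The word itself plus everything above it, bottom-up."""
    chain = [word]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def specific_subject(words):
    """Most specific category predicable of every word in the set."""
    common = set(ancestors(words[0]))
    for w in words[1:]:
        common &= set(ancestors(w))
    for node in ancestors(words[0]):  # lowest common node, bottom-up
        if node in common:
            return node
    return None

print(specific_subject(["poodle", "beagle"]))   # dog
print(specific_subject(["poodle", "cat"]))      # mammal
print(specific_subject(["poodle", "sparrow"]))  # animal: ontological ascent

Note that the poodle/sparrow case lands on "animal" only by the ontological ascent I complained about above, which is the boundary problem all over again.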
And what kind of semantic connections count for binding words into a semantic field? I'd like to think that we can distinguish between "attractive" and
"repulsive" semantic "forces". Two antonyms have a repulsive force between them; two synonyms have an attractive force between them. Two homonyms may or may not have any semantic connections. Two polysemic senses of the same word have an attractive force.
Over time, though, the polysemic force between them may weaken so much that it "breaks", and the result is a pair of homonyms which may then continue to semantically drift apart.
Hyponymy may seem to be an attractive semantic force, since words in the same branch of a semantic tree are thereby related. But, by the same token, words
on different "twigs" of the same branch would thereby have a repulsive semantic force. In other words, two sub-trees can be pulled out from any multi-level semantic hierarchy and, from the point of view of those sub-trees, there is a negative semantic force
between any pair of words consisting of one word from each sub-tree. But conversely, any pair of sub-trees in a semantic hierarchy are part of some higher-level tree and, from that perspective, have an attractive semantic force.
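To see whether these "forces" could bear any theoretical weight, one crude move is to represent them as signed edges and score a candidate field by its internal attraction. A minimal sketch, with invented words and weights (the numbers mean nothing beyond their signs and rough sizes):

# Signed "semantic force" edges (invented): positive = attractive
# (synonymy, shared polysemy), negative = repulsive (antonymy, sibling twigs).
FORCES = {
    ("big", "large"): +0.9,    # synonyms attract
    ("big", "small"): -0.8,    # antonyms repel
    ("large", "small"): -0.8,
    ("big", "huge"): +0.7,
    ("small", "tiny"): +0.7,
}

def force(a, b):
    return FORCES.get((a, b)) or FORCES.get((b, a)) or 0.0

def cohesion(words):
    """Mean pairwise force inside a candidate semantic field."""
    pairs = [(a, b) for i, a in enumerate(words) for b in words[i + 1:]]
    return sum(force(a, b) for a, b in pairs) / len(pairs)

print(cohesion(["big", "large", "huge"]))           # positive: plausibly one field
print(cohesion(["big", "large", "small", "tiny"]))  # antonym pairs drag it down

Notice the awkward consequence: big/small repel on this scoring, yet intuitively both belong to a SIZE field, which is one more reason I am not ready to trust the definition.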
But I'm still not ready to say that this discussion provides me with a theoretically well-grounded concept of a semantic field because any definition
of "semantic field" still seems to me to run in too-tight a circle. So this leads me to provisionally treat "semantic field" as a heuristic, meaning that it is any collection of words that the person positing the semantic field feels are semantically related
to one another, and all related to (predicable of?) a "specific subject" (which is itself a lexical item).
So: provisionally taking the concept of a semantic field to be a heuristic, my answer to Leo's question is that I am not interested in semantic fields per se, but that I am very interested in "developing a lexical theory that would account for such". That would mean, among other things, providing a theoretical context which would support a rigorous definition of "semantic field" and which would (i) define rigorous boundary conditions for semantic fields, and (ii) characterize semantic relationships among semantic fields, etc.
I want to stay with semantic fields awhile longer. So I'm now moving on to Ray Jackendoff, in particular (i) Semantics and Cognition (1983), and (ii)
Foundations of Language (2002). Besides the use he makes of semantic fields (especially in the later book), Jackendoff takes a position on the internalism/externalism debate. He is, albeit in a qualified sense, an internalist about semantics, a position
for which meaning is "in the head". Philosophers, on the other hand, often take an externalist stand, in which meaning is a relationship between words and things.
I'd like to develop the notion that this distinction is a (very big) tempest in a teapot. (In doing so, though, I suspect that the best I will be able
to do is provide a series of footnotes to Kant.)
On Saturday, September 19, 2015 12:20 PM, Thomas Johnston <tmj44p@xxxxxxx> wrote:
(This is my second and last catch-up post.)
The research of yours that you mention is right in my own field of interests. I would appreciate any url links you have to that work.
Concerning the "drift" of "semantic fields" that you refer to, the question that interests me is "What causes the drift?" I think that perhaps there are
two things that account for it.
The second, derivative, factor is statistical. Semantics in a language derives, ultimately, from the state of each individual's semantic net at a
point in time. Continuities in those semantic net states over time and over small to large communities of speakers are the necessary condition of the use of language to communicate. Those statistical patterns are periodically formalized in both general and
special-purpose dictionaries, whose definitions are based on guesses that lexicographers make about the statistical patterns in those states over person and over time, and the aggregate state reached in a given language community at the point in time the definitions
are formulated.
The first, basic, factor is the plasticity of the human brain. Over time, and under the influence of what we read and of our verbal communications with
others, we evolve neurally-based dispositions to use, and to accept as valid, pairs of words/expressions.
At the weaker end of the dispositional spectrum for a given pair of expressions, we have dispositions that correspond to statements expressing supposedly
factual regularities. These are Kant's synthetic a posteriori statements, empirical generalizations open to revision.
At the stronger end of the dispositional spectrum, we have dispositions that correspond to statements expressing accepted linguistic conventions. These
are Kant's analytic a priori statements, ones not open to revision (except by revising the semantic rules the statements express).
This dispositional spectrum is what accounts for the analytic/synthetic continuum, and what explains why, as Quine demonstrated, there is no analytic/synthetic
dichotomy (thus nailing shut the coffin of logical positivism).
Which brings to mind an example provided by A. J. Ayer. He used the statement "Loadstones attract iron" as an example, and suggested, quite reasonably,
that this statement must have begun life as an empirical generalization, but ended up as an analytic statement, reflected in the fact that we just wouldn't count anything as a loadstone if it didn't attract iron.
In an example I discussed in my dissertation, a semantic key to the debate between pro- and anti-abortion advocates is whether or not a fertilized human
egg is a human being. For anti-abortionists (at least those of a religious persuasion), "A fertilized human egg is a human being" is what Kripke would classify as an analytic a posteriori statement -- analytic because no empirical evidence would be counted
by them as evidence against the statement, and a posteriori because it is about "matter of fact".
Reverting to Ayer's example, the semantic drift of the pair "loadstones" and "attract iron" was toward an increasingly strong disposition to disallow
counter-examples. Certainly this semantic drift ended up with the disposition to disallow counter-examples first in the scientists of that day, then eventually in the wider community of speakers. The drift, for this pair of expressions, was driven by the increasing
scarcity of statements purportedly expressing counter-examples. Each person's semantic net evolved over time from a state in which counter-example statements would be at least considered, to a state in which they were no longer considered.
For each such person, over time, patterns of linguistic usage strengthened the connection between the two terms until the statements expressing those
connections became "true by definition". Across increasingly wider linguistic communities, the statistical aggregation force led to lexicographic revisions of the relevant dictionaries.
But this drift across the dispositional strength spectrum is not itself free. "Loadstones" and "attract iron" are each connected with a large number of
other potentially co-occurring expressions, some of which would pull (via the dynamism of neural connectivity) against that particular drift of that particular pair of expressions. And so I reach a pale wash of neural metaphor over Quine's conceptual holism.
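The smallest simulation I can imagine of that story goes like this (every number is invented; it is a cartoon of the mechanism, not a model of any data):

import random

random.seed(0)

def simulate(speakers=100, years=50, rate0=0.3, decay=0.98):
    """Each speaker's disposition strength for a pair, in [0, 1].
    Meeting a counter-example weakens it; its absence strengthens it.
    The counter-example rate itself decays: they grow scarce."""
    strengths = [0.5] * speakers
    for year in range(years):
        rate = rate0 * decay ** year
        for i in range(speakers):
            if random.random() < rate:   # "this loadstone didn't attract iron!"
                strengths[i] = max(0.0, strengths[i] - 0.1)
            else:                        # ordinary reinforcing usage
                strengths[i] = min(1.0, strengths[i] + 0.05)
    return sum(strengths) / speakers

aggregate = simulate()
print(round(aggregate, 2))  # with these toy numbers the aggregate ends near 1.0,
                            # the point where lexicographers enshrine the pair

The holism point above would enter as cross-pair coupling terms, which this cartoon deliberately omits.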
I'd like to know if you can tie this informal description of the proto-theory I have been working on for four decades to academic work you are familiar
with.
Regardless, thanks for already responding to my original message to you.
Leo, you asked me, earlier, to clarify some of what I said above, explaining in more current terminology what my interests are, and where they are situated
in the corpus of current work. I am busy reading your Geeraerts references right now, and hope, with that, to be able to respond in a week or so.
I have a lot of catching-up to do!
On Saturday, September 19, 2015 12:05 PM, Thomas Johnston <tmj44p@xxxxxxx> wrote:
OK Leo. Here's my re-post, to the forum, of what I've posted earlier to you.
For others, the topic I'm concerned with in these postings is lexical semantics -- the semantics of sub-sentential expressions. Also, I have begun reprising
some of my ancient notes on this topic, and they may be difficult to follow because over three decades of work in lexical semantics have taken place since I wrote them. And so there may be terminology I use that needs clarification. Also, my current research
in this field is at a rudimentary level, and so there may be developments that, once I am aware of them, will point out the error of my ways.
Contributions from others in this forum with interests in lexical semantics are most welcome.
And so, what I wrote earlier was this:
Thank you for the overview of distributional semantics which, in fact, I was unaware of. Also for the references you provide.
The first thing that comes to mind is that the lexical use patterns which these statistical techniques will certainly reveal
/ have revealed need a theory, an explanation. Recently, I have gone back to fairly extensive unpublished material which I wrote in the 70's and 80's. Combining it with my current research
and notes, I have a corpus of work which I provisionally think might form the basis of such a theory.
Currently, I'm trying to figure out how much of Gardenfors' new book, The Geometry of Meaning, has already anticipated my work on such a theory. Unless
you have already concluded (as I nearly have) that his conceptual spaces won't carry all the weight he puts on them, you might find this book of his interesting.
I learned my compositionality lessons from Jerry Fodor's extended development and defense of the Language of Thought. But what still fascinates me is
the psychological phenomenon of a child's progression from pointing and naming to his construction of elementary sentences. It seems to me to be one of the miracles/mysteries of human intellectual achievement. I look forward to an ANN account of compositionality,
from which point of view Fodor's symbol-based LOT will be seen as an abstract description of what neural networks do, and which will begin the process of removing the mystery from this miracle.
But the semantic forces which account for the statistical patterns discussed in your references are still what interest me the most. I can get into the
literature starting from those references, of course; but if there is anything else you know of that is specifically concerned with a theory of lexical meaning (and lexical meaning change), please let me know.
(There is one more catch-up post I will publish to the forum, and then we're all starting from the same page.)
On Friday, September 18, 2015 7:50 PM, "Obrst, Leo J." <lobrst@xxxxxxxxx> wrote:
[I originally posted this just to you, but we have agreed to share our exchange and invite others into the discussion.]
As you undoubtedly know, recently there has been the emergence of so-called “distributional semantics” in computational linguistics/NLP. This is based on corpus linguistics, i.e., large-scale statistical methods that are at best light-weight in their use of “knowledge-based” or formal linguistic/semantic methods.
Distributional semantics: words as “meaning” the contexts/collocations they can occur in, which I consider kind of Wittgenstein 2 (Investigations,
not Tractatus) in nature.
Some folks are trying to combine distributional semantics with more formal compositional semantics, the latter of which has not focused
primarily on lexical semantics, but rather on the composition of words into higher forms and their semantic interpretations, i.e., going back to Montague in the late 1960s. E.g., see [1, 2]. Also, for a good overview of lexical theories, see [3]. For a
new type-based approach, see [4].
However, there are potentially some useful emerging approaches in ontology research, i.e., quality (or value) spaces, and so-called semantic reference spaces/ranges, especially [5-6]. We are using these in our current clinical healthcare ontology research (forthcoming), which can combine quantitative and qualitative quality value spaces, so that, e.g., nominal qualities (“named” qualities; think of “low/medium/high X”) can be mapped into a quantitative range, though imprecisely, given that you have some ordering on the regions. I am myself thinking of something similar to this for so-called “semantic fields”, i.e., that one can begin to think of these spaces and their points/regions as “drifting” over time.
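A minimal sketch of the nominal-to-quantitative mapping, and of regions drifting over time. The boundaries below are invented (loosely evoking blood-pressure categories), and this is not the actual construction in [5-6] or in our forthcoming work:

from bisect import bisect_right

# Nominal qualities as named, ordered regions over a quantitative scale.
REGIONS_1990 = {"bounds": [90, 140], "names": ["low", "medium", "high"]}
REGIONS_2020 = {"bounds": [90, 130], "names": ["low", "medium", "high"]}

def nominal(value, regions):
    """Map a quantitative value into its named region."""
    return regions["names"][bisect_right(regions["bounds"], value)]

for v in (85, 135, 150):
    print(v, nominal(v, REGIONS_1990), "->", nominal(v, REGIONS_2020))
# 135 is "medium" under the 1990 regions but "high" under the 2020 ones:
# the measured point didn't move; the named region drifted.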
[3] Geeraerts, Dirk. 2009. Theories of Lexical Semantics. Oxford University Press.
[4] Asher, Nicholas. 2011. Lexical Meaning in Context: A Web of Words. Cambridge: Cambridge University Press.
[6] Probst, F. 2008. Observations, measurements and semantic reference spaces. Applied Ontology 3 (2008) 63-89.
The description of knowledge-based vs "statistics"-based AI is an entry point into so much that I am interested in. Perhaps the distinction will eventually cease to be one in which ontologies are associated with the former but not with the latter. Perhaps the distinction will eventually become one between software systems which are given human-developed ontologies, and software systems which abstract their own ontologies from the patterns they discover, by naming those patterns and then developing additional patterns by playing with set-theoretic quasi-random combinations of those named patterns.
And the way we play these set-theoretic games in our heads, I have suspected for a long time, is by means of background processes in which Venn-diagram-like
representations of those labelled patterns are what is actually manipulated, at close to the neural level. (This is a bit like Gardenfors, but also quite a bit unlike him.)
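A minimal sketch of the set-theoretic play I mean, with invented patterns and names (nothing here is anyone's published method, least of all Gardenfors'):

import random

random.seed(1)

# Invented "discovered patterns": each name labels an extension,
# a set of observed instances.
patterns = {
    "flies": {"sparrow", "bat", "mosquito"},
    "mammal": {"bat", "cat", "whale"},
    "swims": {"whale", "duck", "cat"},
}

OPS = {"and": set.intersection, "or": set.union, "but-not": set.difference}

def invent_category(patterns):
    """Combine two named patterns with a quasi-random set operation."""
    a, b = random.sample(sorted(patterns), 2)
    op = random.choice(sorted(OPS))
    return f"{a} {op} {b}", OPS[op](patterns[a], patterns[b])

for _ in range(5):
    name, extension = invent_category(patterns)
    if extension:  # keep only the non-empty inventions
        print(name, "->", sorted(extension))
# e.g. "flies and mammal -> ['bat']" would be one such abstracted category.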
I suspect this is far too brief, and also far too off the beaten academic path, to be more than vaguely suggestive, if even that. The interests that it
hints at are (i) my interest in lexical semantics, vs. what I see as a one-sided concentration on the semantics of statements on the part of philosophers especially; and (ii) my interest in diachronic semantics, i.e. in how the semantic web that is embodied
in the neurochemical web of our brains evolves over time, as it obviously does.
I'm tempted to just erase this. But I feel among friends here, and so I've decided that it doesn't matter if this sounds foolish. And if there are others
here to whom this is not completely nonsensical, I'd enjoy hearing from you, and especially hearing about the "core bibliographies" relevant to this stuff that you are currently working with.
On Tuesday, September 15, 2015 12:27 AM, "Obrst, Leo J." <lobrst@xxxxxxxxx> wrote:
-----Original Message-----
FLAIRS Knowledge-Based AI Track
Special Track at FLAIRS-29, Key Largo, Florida USA
In cooperation with the Association for the Advancement of Artificial Intelligence (AAAI).
Paper submission deadline: November 16, 2015.
Notifications: January 18, 2016.
Camera-ready version due: February 22, 2016.
All accepted papers will be published as FLAIRS proceedings by the AAAI.
What is Knowledge-Based AI?
After an early dominance in AI, especially in NLP, of approaches based on engineered knowledge resources such as rule bases and ontologies modeling (part of) the world, the statistical winter set in in the 1980s. Fueled by increased computing power, statistics-based replacements for modeling the world, including machine learning and neural networks, led to early successes before they hit their ceiling and resulted in algorithmic arms races. To get AI out of these trenches and back into mobile warfare, knowledge-based methods have not only been paid lip service to in the many "semantic" revolutions, but actual applications have been built, often with complementary methodologies that paired statistical and knowledge-based solutions.
The scope of the track includes research, proof-of-concept and industry
applications in the area of knowledge-based AI, i.e. systems whose
functionality is informed by computational knowledge resources
(ontologies, lexicons, semantic networks and/or knowledge bases). While
knowledge-based AI is often juxtaposed with statistics-based AI, we
see the contrast as unnecessarily exclusionary, in that systems combining
the intuitive directness of knowledge representation with the efficiency
of statistics-based computation have distinct advantages.
What is the GOAL of the track?
To showcase recent knowledge-based theories, methodologies, and
applications in AI and to foster new approaches of this kind, including those
paired with statistical and machine-learning approaches.
What kind of studies will be of interest?
Papers and contributions are encouraged for any work relating to
Knowledge-Based AI. Topics of interest may include (but are in no way limited to):
• spreading activation networks
• applications in knowledge-based AI
• hybrid probabilistic/machine-learning & knowledge-based systems
Note: We invite original papers (i.e. work not previously submitted, in
submission, or to be submitted to another conference during the reviewing process).
Interested authors should format their papers according to AAAI formatting
guidelines. The papers should be original work (i.e., not submitted, in
submission, or submitted to another conference while in review). Papers
should not exceed 6 pages (4 pages for a poster) and are due by November
16, 2015. For FLAIRS-29, the 2016 conference, the reviewing is a double
blind process. Fake author names and affiliations must be used on
submitted papers to provide double-blind reviewing. Papers must be
submitted as PDF through the EasyChair conference system, which can be
accessed through the main conference web site
EasyChair login - your EasyChair account information is hidden from
reviewers. Authors should indicate the Knowledge-Based AI special track for
submissions. The proceedings of FLAIRS will be published by the AAAI.
Authors of accepted papers will be required to sign a form transferring
copyright of their contribution to AAAI. FLAIRS requires that there be at
least one full author registration per paper.
Papers will be refereed and all accepted papers will appear in the
conference proceedings, which will be published by AAAI Press.
Track Co-Chairs:
• Christian F. Hempelmann, Texas A&M University-Commerce
• Gavin Matthews, NTENT.com
• Max Petrenko, NTENT.com
Program Committee:
• Christian F. Hempelmann, Texas A&M University-Commerce
• Elena Kozerenko, Russian Academy of Sciences
• Gavin Matthews, NTENT.com
• Max Petrenko, NTENT.com
• Victor Raskin, Purdue University
• Julia M. Taylor, Purdue University
• Tony Veale, University College Dublin
• Yorick Wilks, University of Sheffield & IHMC, Florida
• Michael Witbrock, VP for Research, Cycorp.
Questions regarding the Knowledge-Based AI Special Track should be
addressed to the track co-chairs:
• Christian F. Hempelmann, Texas A&M University-Commerce
Questions regarding Special Tracks should be addressed to Zdravko Markov.
Conference Chair: William (Bill) Eberle, Tennessee Technological University
Program Co-Chairs: Zdravko Markov, Central Connecticut State University,
Special Tracks Coordinator: Vasile Rus, The University of Memphis, USA
Paper submission site: follow the link for submissions at
Christian F. Hempelmann, PhD | Assistant Professor of Computational Linguistics
Department of Literature and Languages
Texas A&M University-Commerce
P.O. Box 3011 | Commerce, TX 75429-3011
The Texas A&M University System
_______________________________________________
Dr. Leo Obrst The MITRE Corporation, Information Semantics
Voice: 703-983-6770 7515 Colshire Drive, M/S H317
Fax: 703-983-1379 McLean, VA 22102-7508, USA