
Re: [ontolog-forum] What the difference re., Data Dictionary, Ontology,

To: ontolog-forum@xxxxxxxxxxxxxxxx
From: Kingsley Idehen <kidehen@xxxxxxxxxxxxxx>
Date: Sat, 15 Feb 2014 17:06:50 -0500
Message-id: <52FFE4FA.5050402@xxxxxxxxxxxxxx>
On 2/15/14 3:46 PM, David Eddy wrote:
Kingsley -

On Feb 15, 2014, at 3:03 PM, Kingsley Idehen wrote:

 What's the MVC pattern about, for instance? It breaks a system down into constituent parts that deal with: presentation, orchestration, and data representation.

Since you started with a blank sheet of paper, you have had the luxury to follow MVC.  A lot of systems do not have those divisions.  I didn't encounter it until 1981.

The MVC acronym (like most things) is/was a crystallization of patterns that originated in the 1960s (or even earlier). The fundamental point is that you can loosely couple presentation, orchestration, and the modelling of data.
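The loose coupling described above can be shown in a minimal sketch (all class and method names are hypothetical, purely for illustration): the model knows nothing about presentation, the view renders whatever it is handed, and the controller orchestrates between them, so any part can be swapped out independently.

```python
# Minimal sketch of MVC-style loose coupling (names are hypothetical):
# model = data representation, view = presentation, controller = orchestration.

class Model:
    """Data representation: knows nothing about presentation."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class View:
    """Presentation: renders whatever data it is handed."""
    def render(self, items):
        return ", ".join(items)


class Controller:
    """Orchestration: wires model and view together."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_item(self, item):
        self.model.add(item)

    def show(self):
        return self.view.render(self.model.items)


controller = Controller(Model(), View())
controller.add_item("invoice")
controller.add_item("receipt")
print(controller.show())
```

Because each part only touches the others through narrow interfaces, replacing the view (say, text output with HTML) requires no change to the model.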




The SYSTEMS are the machine tools that produce the end product, DATA.

See my comments above; you are repeatedly misunderstanding me and simply refusing to accept that fact.

I have yet to see you present something in the OpenLink/RDF/OWL stack that deals with analyzing systems. 

We are about analyzing data that's accessible via open standards. In some cases, we make drivers/providers/transformers for databases (that underlie applications and services) using relevant data-access standards, which enables integration in new usage scenarios, e.g., HTTP-based Webs of Linked Data that leverage RDF-based entity-relation semantics.

My focus and passion is all about providing access to data used by existing systems. I've never looked at any Semantic Web technology stack component as a replacement for existing systems. It's all about interoperability with minimal disruption (if any) to what exists.
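The kind of transformer described above can be sketched in miniature: rows from a relational source are re-expressed as (subject, predicate, object) triples, without touching the source system. The table contents, base URI, and vocabulary URI below are all invented for illustration.

```python
# Hypothetical sketch of exposing relational rows as RDF-style triples,
# in the spirit of the drivers/transformers described above.
# All data, URIs, and column names are invented.

rows = [
    {"id": 1, "name": "Acme Corp", "city": "Boston"},
    {"id": 2, "name": "Globex", "city": "Springfield"},
]

BASE = "http://example.com/customer/"    # hypothetical entity base URI
VOCAB = "http://example.com/schema#"     # hypothetical vocabulary URI


def row_to_triples(row):
    """Map one relational row to (subject, predicate, object) triples."""
    subject = BASE + str(row["id"])
    for column, value in row.items():
        if column != "id":               # the key column becomes the subject
            yield (subject, VOCAB + column, value)


triples = [t for row in rows for t in row_to_triples(row)]
for s, p, o in triples:
    print(s, p, o)
```

The legacy schema is untouched; the triples are simply another projection of the same data, which is what makes this non-disruptive.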

Analyzing & presenting the data produced by systems, yes.  The systems no.

Maybe I'm just stuck on Jim Hendler's statement that his work (of which I assume your work is taking advantage) has nothing for legacy systems.  Has he changed his position?

I am not Jim Hendler, and I am sure you can pose that question to Jim Hendler.




I am reminded of wisdom from the 1840s when industrial America was learning how to make things.  It was noticed that building quality into the manufacturing process is far more efficient than inspecting defects out.

Fine, but I don't see how this point is relevant i.e., there's no new insight in the comment above, from my vantage point.

So you can just take any useful looking data source & stuff it into those 50B triples & not worry about the quality & consistency of said data? 

Why on earth would I imply that?

You don't have to do any cleanup?

My demonstration is actually about the ability to leverage provenance, inference, etc., conditionally. Basically, you can look at the data as-is (one lens), apply reasoning and inference (another lens), or do either of the aforementioned with or without provenance.

This is about Lego-like flexibility. Nothing is forced on you in regards to the data's state. This isn't about edicts; "horses for courses" remains my preferred doctrine.
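The two-lens idea above can be made concrete with a toy sketch: the same set of facts can be read as-is, or with a simple rdfs:subClassOf-style entailment rule applied on demand. The facts and rule below are invented for illustration and are not how any particular reasoner is implemented.

```python
# Sketch of the "lenses" idea: the same facts viewed as-is, or with an
# inference rule applied conditionally. All data here is hypothetical.

facts = {
    ("Fido", "type", "Dog"),
    ("Dog", "subClassOf", "Animal"),
}


def with_inference(triples):
    """Return the input triples plus type entailments derived from
    subClassOf statements, iterating to a fixpoint."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(inferred):
            if p == "type":
                for s2, p2, o2 in list(inferred):
                    if s2 == o and p2 == "subClassOf":
                        new = (s, "type", o2)
                        if new not in inferred:
                            inferred.add(new)
                            changed = True
    return inferred


# Lens 1: the data as-is -- no entailment is present.
assert ("Fido", "type", "Animal") not in facts
# Lens 2: the same data with reasoning applied on demand.
assert ("Fido", "type", "Animal") in with_inference(facts)
```

Nothing in the stored data changes between the two lenses; the reasoning is a view applied (or not) at query time, which is the non-destructive flexibility being described.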






You are looking at this whole affair from the inside out. I actually look from the outside in, 

You're saying that outside in is superior to inside out?  I would argue both are necessary.

I am not talking about "superiority" or "inferiority". I am talking about perspective as it applies to data integration.




because we have new technologies constantly being introduced in an innovation continuum;

Best comment I've heard on this... "Big Data?  My customers can't handle Little Data."

Your customers can handle whatever kind of data their computing resources facilitate.




I am not talking about a "Data Dictionary" product.
I am talking about how data is defined.

So precisely where are you going to keep & maintain the data definitions?

Today, they can live in an Ontology that may or may not be published on the World Wide Web.


Just the definition of the data?  Ignoring the systems that process/produce the data?


Why on earth would I do that?

If I am seeking to make legacy systems data accessible -- in some new usage context -- I have to create what boils down to a mapping between the two realms. This happens all the time, when you get beyond the distracting noise around the Semantic Web and Linked Data.
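The mapping between the two realms can be sketched as a small, inspectable translation layer: cryptic legacy field names on one side, ontology terms on the other. The field names and URIs below are invented for illustration; no real data dictionary is being quoted.

```python
# Hypothetical sketch of mapping legacy field names to ontology terms.
# The legacy system is untouched; the mapping is a separate layer.

FIELD_MAP = {                 # invented legacy names -> invented ontology terms
    "CUST-NM":  "http://example.com/schema#customerName",
    "ADDR-LN1": "http://example.com/schema#streetAddress",
    "ZIP-CD":   "http://example.com/schema#postalCode",
}


def relabel(record):
    """Re-express a legacy record in ontology terms, keeping any
    unmapped fields as-is so nothing is lost."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}


legacy = {"CUST-NM": "Jane Doe", "ZIP-CD": "02457"}
print(relabel(legacy))
```

Because the mapping is data rather than code, it can be maintained, versioned, and published independently of the system it describes, which is what makes the approach low-disruption.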


We are in a continuum, nothing is static. Change is good.

Change terrifies a lot of people, particularly when it's expensive & risky.

You can loosely couple change i.e., you can introduce new perspectives that aren't mutually exclusive in regards to what exists.

Our industry tends to push "rip and replace" (overtly and/or covertly) even though customers seek "incorporation of new and emerging technologies with minimal (if any) disruption to what exists". This is the consequence of marketeers and programmers dominating the discourse; the incentive for both is to "rip and replace" rather than loosely couple and integrate.



I just saw a Gartner note taking the position that ERP systems installed—and heavily customized—over the past 15 years are the new legacy systems.

Yes, they are, but that doesn't render the systems useless. In my experience, code is like wine: the older code is superior to the newer code.


Are you aware Massachusetts attempted to replace its revenue system by customizing a COTS package and walked away after spending $46M?



This is a common story across enterprises and governments. They are always tricked (sadly) into futile "rip and replace" adventures.


Any DBMS can implement an open standard for data access, circa 2014.

Are you referring to the DBMS engine or the application(s) built with the DBMS?

What's the payback to the business to retrofit a silo to current standards?

The utility of the legacy system remains, i.e., you don't rip it out and replace it with a far inferior rewrite devoid of all the subject matter expertise and experience wired into the older solution.






That's what ontologies do. And this already works. The challenge is getting people to actually look at what's working instead of constructing endless conjecture-laden threads.

Legacy systems are not conjecture.  They're what enable our lives.

Legacy systems aren't conjecture. I was referring to your characterization of the challenge.


How long does it take to install an ontology for a 43-year-old legacy system?

Anywhere from 5 minutes to 43 years.

 What would be the value?

Keeping the utility of the legacy system alive without going down the futile "rip and replace" path.


____________________________
David Eddy
Babson Park, MA

deddy@xxxxxxxxxxxxx



-- 

Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software     
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen






_________________________________________________________________
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/  
Config Subscr: http://ontolog.cim3.net/mailman/listinfo/ontolog-forum/  
Unsubscribe: mailto:ontolog-forum-leave@xxxxxxxxxxxxxxxx
Shared Files: http://ontolog.cim3.net/file/
Community Wiki: http://ontolog.cim3.net/wiki/ 
To join: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
