To: Len Yabloko <lenyabloko@xxxxxxxxx>, "[ontolog-forum]" <ontolog-forum@xxxxxxxxxxxxxxxx>
From: Ali SH <asaegyn+out@xxxxxxxxx>
Date: Wed, 16 Nov 2011 12:29:16 -0500
Dear Len and Simon,
On Sun, Nov 13, 2011 at 12:02 PM, Len Yabloko <lenyabloko@xxxxxxxxx> wrote:
I don't believe it is simply a technical issue.
Though I will admit that it isn't an analysis conducted at the 10,000 ft level, but perhaps at the 1,000 ft level. While Hume made important observations about the justification for the inductive step based on empirical observation, there are still practical questions today that need to be addressed, even if his philosophical discomfort remains unresolved.
Namely, contemporary scientific practice, especially in a social / collaborative context, is very much about trying to integrate observations, experiments and hypotheses from numerous people, labs, institutions and sources - it constitutes a complex socio-technical system.
For the sake of argument, let's assume that we don't have access to the true ontological nature (whatever that might mean) of sensible phenomena, and that the consequent generalizations we make are rooted in imperfect knowledge. (I should note, I am largely in agreement with Searle on this - there really does exist an objective reality, but for the purposes of what follows, one's position regarding this is immaterial.) Indeed, most scientists are careful to note that they establish correlation, not causation - it is usually the media spin that transforms these correlations into causation. Even today, the link between lung cancer and smoking is just a very high correlation, with probable causation. So Hume's problem remains, but does not stop science from experimenting, hypothesizing and revising.
Moreover, all this does not change the fact that different researchers / labs have developed vocabularies to describe the phenomena under consideration and use these vocabularies to form hypotheses about said phenomena. Indeed, I think this is where the more recent notion of ontology (and ontological analysis) can make some important contributions to science and research.
I would first ask - is not one of the goals of the scientific method to create a shared conceptualization of a domain? At the very least, this is almost a tautology in collaborative science. That is to say, the very notion of replicating an experiment and building a body of knowledge about the world revolves around communicating and developing (to some level) a shared understanding of the world.
This isn't to say that given our current level of understanding, there need be a single, unique shared conceptualization. The strength of any conceptualization is a function of how well it correlates with experience, how well it can be used to predict, and how well its proponents have argued for / expounded on the meme. Nonetheless, a basic goal of the scientific method in a collaborative context is to build on experimental data and share this knowledge.
Namely, for me to be able to reuse your research or results, I need to have an understanding of your conceptualization of the domain and I must be able to map some subset of your vocabulary and assumptions to my own. We may form differing hypotheses or interpretations of the implications of results, but for us to be able to cooperate, there must be some level of mutual understanding.
In this context, each researcher (or lab, or institution, or your preferred social grouping) deploys a conceptualization C of the domain using some vocabulary V. When constructing an experiment, I am often constructing a scenario where I can test a hypothesis (usually some sentence S or model M using the language of V and an extension of C) which is predicated on deploying some subset of V and the rules governing the entities under investigation. The results of my experiment either support, undermine or are inconclusive with respect to my conceptualization - more precisely, my experiment uncovers some data which in effect constitute a model that either satisfies (or does not satisfy) the assumptions of my (sub)conceptualization. Often, these analyses and experiments establish, reinforce or undermine a statistical correlation...
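The C/V/S/M relationship sketched above can be made concrete in code. The following is a minimal, purely illustrative sketch - the `Conceptualization` class, the sample vocabulary and the constraint are all invented for this example, not drawn from any actual ontology tool - showing a conceptualization as a vocabulary plus constraints, with experimental data acting as a model that either satisfies it or not:

```python
# Hypothetical sketch: a conceptualization C as a vocabulary V plus a set
# of constraints; an experimental observation acts as a model that either
# satisfies C or not. All names and values here are invented.

class Conceptualization:
    def __init__(self, vocabulary, constraints):
        self.vocabulary = set(vocabulary)   # V: the terms this lab uses
        self.constraints = constraints      # rules over entities, as predicates

    def satisfied_by(self, observation):
        """Check whether an observation (a dict of term -> value) uses
        only terms from V and meets every constraint of C."""
        if not set(observation) <= self.vocabulary:
            return False                    # observation uses unknown terms
        return all(rule(observation) for rule in self.constraints)

# A toy (sub)conceptualization of gene regulation, with invented terms.
C = Conceptualization(
    vocabulary={"gene", "promoter_strength", "expression_level"},
    constraints=[
        # Hypothesis S: stronger promoters yield higher expression.
        lambda obs: obs["expression_level"] >= 0.5 * obs["promoter_strength"],
    ],
)

result = {"gene": "G", "promoter_strength": 0.8, "expression_level": 0.6}
print(C.satisfied_by(result))  # -> True: the data supports C
```

An observation phrased in a different lab's vocabulary would fail the first check outright - which is exactly the point: before data can count for or against my conceptualization, it has to be expressible in my vocabulary at all.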
Let's make this generic example above more concrete. I am a researcher involved in synthetic biology, and I want to work with / contribute to the BioBricks foundation. I've already conducted a number of experiments to determine whether a particular gene G might be relevant to a phenomenon I'm investigating. At this point, I am curious to know what gene G expresses and what regulates it. There are still many unknowns about how genes actually express a protein (or other genes) and how other elements regulate said expression. Of course, I'm not going to stop my research and ponder and wait to resolve how exactly I conduct inference on causality. Rather, I make assumptions about the elements of my domain, and use these assumptions to align my thoughts with those of the broader BioBricks (and synth bio) community - at least those people that I read and communicate with. There may exist some sub-culture within synthetic biology that is conducting research with assumptions that are completely at odds with mine.
Now for this collaboration to be successful, I need to share a large subset of the assumptions of the BioBricks project, plus background knowledge with respect to molecular biology, gene function etc. And I believe that it is in this vein that the most practical consequences of the linked-to article come through.
Especially when it comes to the statistical combination of results: for another researcher to be able to successfully integrate my work with theirs, they must be able to correctly interpret the basic tenets of my conceptualization, realize where theirs differs from mine, and construct their experiments accordingly.
It is not obvious, it is not easy, but it is necessary.
This is particularly acute in the social sciences, where many simplifying assumptions are made and there are so many variables at play. For meta-analyses to succeed, they must account for the differences in the (relevant sub-) conceptualizations, and it is in this respect that ontological analysis can help. One needs to align not only the particular experimental methods, but also the vocabularies and assumptions used in each conceptualization of the domain.
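To make the meta-analysis point concrete, here is a hedged sketch of pooling correlation coefficients from two labs via Fisher's z-transform (a standard fixed-effect technique). The study names, figures and the alignment map are all invented for illustration; the point is that pooling is only meaningful once the labs' variable names have been mapped onto a shared vocabulary:

```python
import math

# Invented example: two labs report a correlation between the "same"
# variables, but under different names. Before pooling, their
# vocabularies must be aligned onto a shared one.
alignment = {"smoking_rate": "tobacco_use", "ca_incidence": "lung_cancer"}

lab_a = {"vars": ("tobacco_use", "lung_cancer"), "r": 0.62, "n": 120}
lab_b = {"vars": ("smoking_rate", "ca_incidence"), "r": 0.55, "n": 200}

def align(study, mapping):
    """Rename a study's variables into the shared vocabulary."""
    study = dict(study)
    study["vars"] = tuple(mapping.get(v, v) for v in study["vars"])
    return study

def pool_correlations(studies):
    """Fixed-effect pooling of Pearson r values via Fisher's z-transform,
    weighting each study by n - 3 (the inverse variance of z)."""
    assert len({s["vars"] for s in studies}) == 1, "vocabularies not aligned"
    num = sum((s["n"] - 3) * math.atanh(s["r"]) for s in studies)
    den = sum(s["n"] - 3 for s in studies)
    return math.tanh(num / den)

pooled = pool_correlations([lab_a, align(lab_b, alignment)])
print(round(pooled, 3))  # pooled r, between the two labs' estimates
```

Without the `align` step, the pooling function refuses to combine the studies - a toy version of the claim above: the statistics cannot be merged until the conceptualizations have been reconciled.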
I'm not going to stop doing science while waiting for the problem of the grounding of induction to be resolved... And the science that I'm doing, involving thousands if not millions of researchers dispersed across the globe, requires alignment within the sub-culture with whom I'm sharing my data and combining results.
Message Archives: http://ontolog.cim3.net/forum/ontolog-forum/