
Science is objective?

Of course, science deals with cold hard facts. — 48.5% (47 votes)
No, science is subject to human interpretation and subjectivity. — 41.2% (40 votes)
I don't know. — 10.3% (10 votes)
Total votes: 97

I recommend reading The New Inquisition by R.A. Wilson for anyone interested in this topic.

Samadhi23
I recommend reading The New Inquisition by R.A. Wilson for anyone interested in this topic.
Could you tell me more about the book? c:

If you like this thread you'll love this site:

http://www.timecube.com/

Heimdalr
If you like this thread you'll love this site:

http://www.timecube.com/
I don't know what the ******** that link is trying to say or how it's relevant to this thread.
Science is objective and subjective.
Religion and government are subjective.

Rebeldoomer
Science is objective and subjective.
Religion and government are subjective.
Are you going to provide evidence for those claims? :/
When Egyptian religion ruled the Nile, they had top-notch technology. Their understanding of mathematics, geology, biology, medicine, and architecture was unparalleled. Some wonder whether they understood things like Leyden jars and had crude light bulbs.

The Egyptians didn't consider their beliefs to be superstitions. They didn't consider their observations to be flights of fancy or religion. They considered their way of thinking to be fact. Their "religion" incorporated ideas about the origins of life, the nature of the universe, and their place in it. They applied their knowledge accordingly. They were speculative about many things and when an authority made speculations the underlings considered those speculations to be facts.

Science has become another technologically endowed religion that believes more about itself than reality can substantiate. Durkheim would laugh hysterically at the rituals and priestly robes that have become the popular occult activities of the day. Young scientists aspire to Nobel Prizes, but the committee that grants them also grants them to international entities like the European Union, and provides "PEACE" medals to warmongers like the US President. Clearly, if that represents the pinnacle of credibility for scientists, then credibility itself has lost its way.
frozen_water
That's a rather big assumption that, based on my experience speaking with professors who actually conduct such research, is inaccurate.


Having conducted such research [though in a different field], it is very accurate. At least if their result actually carries weight. The problem then isn't really with the scientific method, but the inability to test every result that has been produced. Any important result is tested many times, but a result regarding, for example, how to grow an obscure mycoplasma, will go unchecked for a long time because it is something that comes up irregularly [and in those cases, the results published may have been true, but since all the environmental factors are uncontrollable, it is possibly a unique and thus non-scientific result].

Quote:
You skipped the "obtain results" step there.


Generally, those results are produced by the experiment you conducted and will be numerical in nature [how many horses clinically have disease X, how many particle decays occurred in time period Y, etc.].

Quote:
And it's only assumed that they draw appropriate conclusions; the only people who could know that their data is off are either someone who is equally well versed in the subject matter at hand or a senior in that field, and again they would be relying upon their seniority, not objectivity, to make such a decision.


They rely upon absolutely true statistical tests. Given numerical data, statistical tools are then used to accept or reject the hypothesis. Since these are mathematical in nature, they are absolutely proven provided that the data is not biased by throwing some out. The seniority and training mostly help in determining which questions raised by the results are worth pursuing and which will likely be dead ends. As such, it adds a bit of bias into what questions are pursued, but not inherently into their answers.
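
To make the "accept or reject" step concrete, here is a minimal sketch in Python with scipy, using entirely made-up numbers, testing whether an observed disease count in a herd is consistent with an assumed baseline rate:

```python
# Minimal sketch (hypothetical numbers): test whether an observed disease count
# in a herd is consistent with a known baseline rate, using a binomial test.
from scipy.stats import binomtest

baseline_rate = 0.05      # assumed background prevalence of disease X
n_horses = 200            # herd size surveyed (made-up)
n_diseased = 23           # horses clinically showing disease X (made-up)

result = binomtest(n_diseased, n_horses, baseline_rate, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")

# Reject the null hypothesis ("prevalence is just the baseline") at the 95% level
if result.pvalue < 0.05:
    print("Observed prevalence is significantly above baseline.")
else:
    print("No significant excess over baseline; could just be noise.")
```

The mathematics is the same whatever the data happen to describe; only the inputs are domain-specific.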

I don't think that science should be the deciding factor because some things are general...

There are thoughts and temperaments that may make someone appear to have a condition, yet they may not. Or exude a specific sexuality, and they may be wrong.

Science is pretty much assumption at first, then developing an answer based on gathered information, and then treating that answer as a constant.

We already know what assuming does. Unfortunately it is necessary to ask a question to someone you don't know, or even suggest a topic to converse about.

But putting faith in science is being selective about what certain faiths teach. Better still, there is still faith, just set against actual faith. It's the same feeling, but used on something else.

Hopefully science teaches morals, and filters some assumptions... "Legitimate rape"... Because even if religion isn't real, it provided order to others and made them better people through example, and if example wasn't enough, fear. The same could be said of law. Do it because it's right, or do it to stay out of jail.

I don't know how one thing is okay and another isn't.

Doubtful Dreamer
frozen_water
That's a rather big assumption that, based on my experience speaking with professors who actually conduct such research, is inaccurate.


Having conducted such research [though in a different field], it is very accurate. At least if their result actually carries weight. The problem then isn't really with the scientific method, but the inability to test every result that has been produced. Any important result is tested many times, but a result regarding, for example, how to grow an obscure mycoplasma, will go unchecked for a long time because it is something that comes up irregularly [and in those cases, the results published may have been true, but since all the environmental factors are uncontrollable, it is possibly a unique and thus non-scientific result].
What makes you think that all legitimate results can be reproduced several times over, or that if something can't be reproduced it's false? Those are both rather big assumptions.

The ability to reproduce something is great, but when it comes to more complex theories, even under the best circumstances reproducing results is difficult.

Quote:
Quote:
You skipped the "obtain results" step there.


Generally, those results are produced by the experiment you conducted and will be numerical in nature [how many horses clinically have disease X, how many particle decays occurred in time period Y, etc.].
Just because your result is a number does not mean it's objective. When determining how many horses have a disease, someone must first determine whether or not each individual horse has that disease, and in doing so relies upon their medical expertise.

Or let's say I'm a geologist doing some field work and I'm trying to determine the rock composition in the area. My end result shows that I found a high concentration of limestone in the area. I determined the samples I found to be limestone by doing some basic tests such as scratch tests, luster, and streak. Again, I'm interpreting these results to produce my data; it's not just recording objective fact. Or let's say a physicist calculates the trajectory of a planet, but when observing its motion he finds that his calculations are off. At that point he has to decide whether his formula is wrong or whether there is an outside influence such as a nearby massive object; there's no objective way for him to know, so he has to make a judgement call.

Many things appear to be objective on the surface but when you look more closely you can see the subtle influences of human manipulation. We currently have no way of just accessing objective truths.

Quote:
Quote:
And it's only assumed that they draw appropriate conclusions; the only people who could know that their data is off are either someone who is equally well versed in the subject matter at hand or a senior in that field, and again they would be relying upon their seniority, not objectivity, to make such a decision.


They rely upon absolutely true statistical tests. Given numerical data, statistical tools are then used to accept or reject the hypothesis. Since these are mathematical in nature, they are absolutely proven provided that the data is not biased by throwing some out. The seniority and training mostly help in determining which questions raised by the results are worth pursuing and which will likely be dead ends. As such, it adds a bit of bias into what questions are pursued, but not inherently into their answers.
How are they absolutely proven? 1+1=2? This is a logical posit, not a scientific result. You overlook that you aren't simply doing 1+1=2, within science you have to first obtain the data (here the 1s) and then attempt to come up with a fitting explanation (the 2).

The subjective part of science again is not in the concept, but the execution. How I go about obtaining that 1, and even when I decide that it's a 1, is subjective. In turn the product (2) is also subjective.
frozen_water
What makes you think that all legitimate results can be reproduced several times over, or that if something can't be reproduced it's false? Those are both rather big assumptions.


You can have a non-scientific legitimate result in that you actually did have the result you say you got, but neither you nor anyone else can get it to work again [which mostly implies an uncontrolled, unaccounted-for experimental variable, such as the ambient air pressure in the mycoplasma example]. Science must be reproducible, otherwise the scientific method is not applicable.

Quote:
The ability to reproduce something is great, but when it comes to more complex theories, even under the best circumstances reproducing results is difficult.


Then it is a bad theory. The standard model of particle physics, for example, details a fairly large number of very rare interactions, such as the Higgs interaction measured at the LHC, but in order for the theory to be correct, I must be able to reproduce those interactions. Otherwise, all that I may have is some unaccounted-for noise that caused me to get that one-time signal. I can't, of course, throw out the signal, but I can't make any good statements about it either.

Quote:
Just because your result is a number does not mean it's objective. When determining how many horses have a disease, someone must first determine whether or not each individual horse has that disease, and in doing so relies upon their medical expertise.


There are well established baselines for comparison. We know what a healthy horse should look like and know what the disease does. Therefore, we can determine whether or not the horse has the disease by comparing to these baselines. The problem of either a false negative or a false positive is a result of noise in the horse population in that individuals do not necessarily fit the baseline, but that the overall population does. This is why statistical tests are done to determine whether or not you have a real signal [result] or you just have noise.
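
As a rough sketch of that baseline comparison (Python with scipy, invented numbers), where individual overlap is the noise and the test asks whether the population as a whole is off the baseline:

```python
# Sketch (made-up numbers): compare a suspect group against a healthy baseline
# on some measurable marker, and let a statistical test decide signal vs. noise.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
healthy_baseline = rng.normal(loc=50.0, scale=5.0, size=100)  # known healthy marker levels
suspect_group = rng.normal(loc=54.0, scale=5.0, size=40)      # group suspected of disease X

t_stat, p_value = ttest_ind(suspect_group, healthy_baseline, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Individual horses will overlap the healthy range (false positives/negatives),
# but the question is whether the *population* deviates from the baseline.
if p_value < 0.05:
    print("Suspect group deviates from the healthy baseline (statistically significant).")
else:
    print("Deviation is within the expected noise.")
```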

Quote:
Or let's say I'm a geologist doing some field work and I'm trying to determine the rock composition in the area. My end result shows that I found a high concentration of limestone in the area. I determined the samples I found to be limestone by doing some basic tests such as scratch tests, luster, and streak. Again, I'm interpreting these results to produce my data; it's not just recording objective fact.


Except mass spectrometry can make an objective identification and should be used to give definitive results.

Quote:
Or let's say a physicist calculates the trajectory of a planet, but when observing its motion he finds that his calculations are off. At that point he has to decide whether his formula is wrong or whether there is an outside influence such as a nearby massive object; there's no objective way for him to know, so he has to make a judgement call.


Well, we actually do have this problem as it is the reason for dark matter. In such a case, several possible solutions are presented and compared to the data and the one that fits the data the best is the one that describes the data the most consistently. So, for the dark matter problem, we had people say there was this dark matter stuff and others that presented many modified theories of gravity. When looking at galactic collision data, it was found that the modified gravity models all failed terribly and no other modified gravity model has been presented which explains the missing mass while still being able to reproduce more simplistic results. It is an application of the scientific method.

Quote:
Many things appear to be objective on the surface but when you look more closely you can see the subtle influences of human manipulation. We currently have no way of just accessing objective truths.


As I have said, you are using rather poor examples of this. The best you can do is that certain theories look the same with our current experimental resolution and people largely just pick the one they like and run with it. Since all the theories say we should get the results we currently have, there is not much that can be done about this until new data comes about.

Quote:
How are they absolutely proven? 1+1=2? This is a logical posit, not a scientific result. You overlook that you aren't simply doing 1+1=2, within science you have to first obtain the data (here the 1s) and then attempt to come up with a fitting explanation (the 2).


The statistical tests are just a branch of mathematics. They are not determined by scientific processes. Applying them to determine whether the results I have can be the product of my theory is thus valid, as the mathematics doesn't care what the results or the theory are. So, I am not using statistical tests to say that 1+1=2, in your example, but to test whether it is true that 1+1=2 given noisy data within some level of confidence.
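
A toy version of that, with made-up noisy "measurements" of 1+1 and a one-sample test against the hypothesized value 2 (Python with scipy):

```python
# Sketch: "is 1+1 = 2?" given noisy measurements, at some confidence level.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)
# Pretend each "measurement" of 1+1 comes back with instrument noise.
measurements = 2.0 + rng.normal(loc=0.0, scale=0.1, size=30)

t_stat, p_value = ttest_1samp(measurements, popmean=2.0)
print(f"p = {p_value:.3f}")

# A large p-value means the data are consistent with "the true value is 2"
# at the chosen confidence level; the test doesn't care where the numbers came from.
```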

Quote:
The subjective part of science again is not in the concept, but the execution. How I go about obtaining that 1, and even when I decide that it's a 1, is subjective. In turn the product (2) is also subjective.


You have thus far given no good examples of subjective collection of data beyond examples that are the result of laziness and are known to be bad science [using only subjective tests for identification].

Doubtful Dreamer
frozen_water
What makes you think that all legitimate results can be reproduced several times over, or that if something can't be reproduced it's false? Those are both rather big assumptions.
You can have a non-scientific legitimate result in that you actually did have the result you say you got, but neither you nor anyone else can get it to work again [which mostly implies an uncontrolled, unaccounted-for experimental variable, such as the ambient air pressure in the mycoplasma example]. Science must be reproducible, otherwise the scientific method is not applicable.
Unfortunately science is littered with unreproducible experiments, and even those that are reproducible are allowed some percentage of error.

Quote:
Quote:
The ability to reproduce something is great, but when it comes to more complex theories, even under the best circumstances reproducing results is difficult.
Then it is a bad theory. The standard model of particle physics, for example, details a fairly large number of very rare interactions, such as the Higgs interaction measured at the LHC, but in order for the theory to be correct, I must be able to reproduce those interactions. Otherwise, all that I may have is some unaccounted-for noise that caused me to get that one-time signal. I can't, of course, throw out the signal, but I can't make any good statements about it either.
It doesn't have to be a bad theory, there are virtually infinite numbers of variables that come into play when testing things.

Such as cloning animals, it's very difficult to reproduce the results, but it has been done. If we were working with an equally difficult concept it's easy to see how labs, even when working with a theory that is potentially correct, would have difficulty reproducing results, and just because a result is not reproducible does not invalidate the theory, see: Duhem-Quine thesis.

Quote:
Quote:
Just because your result is a number does not mean it's objective. When determining how many horses have a disease, someone must first determine whether or not each individual horse has that disease, and in doing so relies upon their medical expertise.
There are well established baselines for comparison. We know what a healthy horse should look like and know what the disease does. Therefore, we can determine whether or not the horse has the disease by comparing to these baselines. The problem of either a false negative or a false positive is a result of noise in the horse population in that individuals do not necessarily fit the baseline, but that the overall population does. This is why statistical tests are done to determine whether or not you have a real signal [result] or you just have noise.
The problem is that you are using your own understanding of a disease to determine whether or not a horse has a disease. It's circular logic. Your understanding of that disease is based on how you think horses act when they have it, and you determine that horses have that disease based on the traits you associated with that disease (obtained from observing horses you think have it).

Round and round we go. Again, you are assuming that there is some objective understanding of the disease and when horses have it. Have you ever watched the show House? Think of that lovely board where they write down all the symptoms and determine what fits; there are several explanations for the same symptoms. That's why you trust a doctor to use his expertise and diagnose you, because there isn't just some objective list I can look at to determine illness. This is the same reason you usually get a second opinion when dealing with serious issues, because it's not an objective process.

Quote:
Quote:
Or let's say I'm a geologist doing some field work and I'm trying to determine the rock composition in the area. My end result shows that I found a high concentration of limestone in the area. I determined the samples I found to be limestone by doing some basic tests such as scratch tests, luster, and streak. Again, I'm interpreting these results to produce my data; it's not just recording objective fact.
Except mass spectrometry can make an objective identification and should be used to give definitive results.
There is no feasible way to sample the entire area and have it broken down, there is a reason geologists use those tests. Assuming I brought in a sample and analyzed it with a mass spectrometer, how do I know if those results are indicative of the entire area?

Quote:
Quote:
Or let's say a physicist calculates the trajectory of a planet, but when observing its motion he finds that his calculations are off. At that point he has to decide whether his formula is wrong or whether there is an outside influence such as a nearby massive object; there's no objective way for him to know, so he has to make a judgement call.
Well, we actually do have this problem as it is the reason for dark matter. In such a case, several possible solutions are presented and compared to the data and the one that fits the data the best is the one that describes the data the most consistently. So, for the dark matter problem, we had people say there was this dark matter stuff and others that presented many modified theories of gravity. When looking at galactic collision data, it was found that the modified gravity models all failed terribly and no other modified gravity model has been presented which explains the missing mass while still being able to reproduce more simplistic results. It is an application of the scientific method.
And dark matter is a theory, not a fact. It has not been objectively proven that dark matter exists; it's just a belief, which is my point. Science doesn't produce facts, it produces theories. Science is not objective truths, but subjective attempts to understand the universe.

Quote:
Quote:
Many things appear to be objective on the surface but when you look more closely you can see the subtle influences of human manipulation. We currently have no way of just accessing objective truths.
As I have said, you are using rather poor examples of this. The best you can do is that certain theories look the same with our current experimental resolution and people largely just pick the one they like and run with it. Since all the theories say we should get the results we currently have, there is not much that can be done about this until new data comes about.
It's not that there isn't much that can be done about it, there is nothing that can be done about it. Regardless of the theory, it is still just a theory, not fact, not an objective truth, but a guess. Belief in any scientific theory requires faith.

Quote:
Quote:
How are they absolutely proven? 1+1=2? This is a logical posit, not a scientific result. You overlook that you aren't simply doing 1+1=2, within science you have to first obtain the data (here the 1s) and then attempt to come up with a fitting explanation (the 2).
The statistical tests are just a branch of mathematics. They are not determined by scientific processes. Applying them to determine whether the results I have can be the product of my theory is thus valid, as the mathematics doesn't care what the results or the theory are. So, I am not using statistical tests to say that 1+1=2, in your example, but to test whether it is true that 1+1=2 given noisy data within some level of confidence.
And what does that provide you with? A general idea of what may or may not be correlated data. Even if you can show that horses are sick, you can't prove why they are sick. It's probably the polluted water, but it could also be a new disease, or a side-effect of a new feed. 1+1=2 is not subjective, but when you try to give it context it loses that objectivity.

Quote:
Quote:
The subjective part of science again is not in the concept, but the execution. How I go about obtaining that 1, and even when I decide that it's a 1, is subjective. In turn the product (2) is also subjective.
You have thus far given no good examples of subjective collection of data beyond examples that are the result of laziness and are known to be bad science [using only subjective tests for identification].
Bad science? Rock physical property tests are bad science? What constitutes "good science" exactly?
frozen_water
Unfortunately science is littered with unreproducible experiments, and even those that are reproducible are allowed some percentage of error.


What experiments are unreproducible? As for the error comment, that would be the noise in the signal.

Quote:
It doesn't have to be a bad theory, there are virtually infinite numbers of variables that come into play when testing things.


If the theory has one successful event, there is no method of determining that the one event is anything other than noise. That is why it would be a bad theory. There has been one "successful" measurement of a magnetic monopole, but since this is one measurement in many, many such experiments, there are no statistical tools to bring to bear to determine whether or not that was a real signal or just a random noise event. Thus, it is still fairly safe to say that there are no magnetic monopoles and that the standard model does contain all particles that couple to something other than gravity.
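
A back-of-the-envelope illustration of why one event says so little (the expected background of 0.8 is invented): if the run is expected to contain even a fraction of a noise event on average, the Poisson chance of seeing at least one is already sizable.

```python
# Sketch: with one candidate event, you can't rule out noise.
from math import exp

expected_background = 0.8                        # made-up expected noise events over the run
p_at_least_one = 1 - exp(-expected_background)   # Poisson: P(N >= 1) = 1 - e^(-mu)
print(f"P(>=1 background event) = {p_at_least_one:.2f}")   # ~0.55
```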

Quote:
Such as cloning animals, it's very difficult to reproduce the results, but it has been done. If we were working with an equally difficult concept it's easy to see how labs, even when working with a theory that is potentially correct, would have difficulty reproducing results, and just because a result is not reproducible does not invalidate the theory, see: Duhem-Quine thesis.


Yes, cloning is difficult, but it is repeated and has been repeated often since the first successful cloning, following more or less the same procedure. Thus, the procedure is validated.

Using the mycoplasma example again, one success does not validate the procedure if all other attempts at following the procedure fail. This means that something not accounted for in the procedure was responsible for the success and not the procedure. Thus, the procedure is incorrect in its current state.

Quote:
The problem is that you are using your own understanding of a disease to determine whether or not a horse has a disease. It's circular logic. Your understanding of that disease is based on how you think horses act when they have it, and you determine that horses have that disease based on the traits you associated with that disease (obtained from observing horses you think have it).


If you have a healthy baseline and a diseased population and can show that the diseased population is off the healthy baseline in a consistent, statistically significant manner, then you have categorized your disease.

Quote:
Round and round we go. Again, you are assuming that there is some objective understanding of the disease and when horses have it. Have you ever watched the show House? Think of that lovely board where they write down all the symptoms and determine what fits; there are several explanations for the same symptoms. That's why you trust a doctor to use his expertise and diagnose you, because there isn't just some objective list I can look at to determine illness. This is the same reason you usually get a second opinion when dealing with serious issues, because it's not an objective process.


Medicine is a poor example as there is rarely a healthy baseline available for the individual, only for the entire population. Since individuals can drift quite far from the healthy baseline without having any medical condition or even being predisposed to any conditions, a study of the individual is a poor comparison to scientific processes conducted on a population. As such, a great deal of subjective opinion does enter when medicine is practiced on the individual, but this is largely a problem of poorly established "parameters" for the individual which require judgement calls [while medical practice makes use of scientific processes, it itself often strongly deviates from use of the scientific method due to personal bias which has been demonstrated several times through surveys of practices used by practicing doctors].

Quote:
There is no feasible way to sample the entire area and have it broken down, there is a reason geologists use those tests. Assuming I brought in a sample and analyzed it with a mass spectrometer, how do I know if those results are indicative of the entire area?


You would have to use more than one sample. Surely you don't do the test you described on only one sample to categorize the entire area? One sample is insignificant and carries no statistical weight and thus cannot give any information [other than what that sample was made of, but that is useless when you are talking about an area]. Ideally, you should randomly select a number of samples dependent upon the level of confidence you wish and the size of the area under study and then do mass spec on all of them.
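
For a rough sense of the numbers involved (all values illustrative), the standard sample-size estimate for a proportion looks like this:

```python
# Sketch: how many randomly chosen rock samples to send for mass spec,
# for a chosen confidence level and margin of error (all numbers illustrative).
from scipy.stats import norm

confidence = 0.95
margin_of_error = 0.10   # +/- 10% on the estimated limestone fraction
p_guess = 0.5            # worst-case assumed proportion (maximizes n)

z = norm.ppf(1 - (1 - confidence) / 2)                     # ~1.96 for 95% confidence
n = (z**2 * p_guess * (1 - p_guess)) / margin_of_error**2
print(f"~{int(round(n))} samples")                         # ~96 samples
```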

Quote:
And dark matter is a theory, not a fact. It has not been objectively proven that dark matter exists; it's just a belief, which is my point. Science doesn't produce facts, it produces theories. Science is not objective truths, but subjective attempts to understand the universe.


Science is a model, as I have already stated, and dark matter has beaten every other currently available model to a degree that would put it on par with the modern theory of gravity. Also, please do not use "theory" in a colloquial way when discussing scientific concepts as it drastically undermines the testing involved.

Quote:
It's not that there isn't much that can be done about it, there is nothing that can be done about it.


Increase the resolution of the experiment. If the universe has a non-infinitesimal resolution, then there will come a point where one theory is better than all competitors. If not, then such a point will never come and which theory is better is subjective opinion, I agree. However, theories which carry weight generally have the same causative mechanism at the current experimental scale, which is why they are indistinguishable.

Quote:
Regardless of the theory, it is still just a theory, not fact, not an objective truth, but a guess. Belief in any scientific theory requires faith.


It is far more than a guess due to rigorous and repeated testing. If it were a truly random guess, it would have an infinitesimal chance of explaining any phenomena in a repeatable, non-specialized manner.

Quote:
And what does that provide you with? A general idea of what may or may not be correlated data. Even if you can show that horses are sick, you can't prove why they are sick. It's probably the polluted water, but it could also be a new disease, or a side-effect of a new feed. 1+1=2 is not subjective, but when you try to give it context it loses that objectivity.


If it were a new disease characterized identically to the old disease and thus operating under the same causative mechanisms, is it really a new disease? If it presents different deviations in the population than the disease under consideration causes, then the disease you suspected in the first place is obviously the wrong one.

But, let's say that some of the horses were misdiagnosed to have the disease under suspicion due to individual variance. Since individual variance in the population is characterized, this would be the noise in the signal the statistical tests are meant to weed out. It is known the population is imperfect and that the individuals vary and that misdiagnosis occurs.

As for what the statistical tests give you, they give you a tool of testing theories and the best theory wins. If all the theories fail the tests, then there is no good theory at the present time and it is an open question.
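
As a minimal sketch of "the best theory wins" (Python, fabricated data): compare how well two fixed candidate models describe the same measurements and keep the one with the better fit statistic.

```python
# Sketch: compare competing models against the same data via a chi-square-style statistic.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
sigma = 2.0                                                  # assumed measurement error
data = 3.0 * x + 2.0 + rng.normal(scale=sigma, size=x.size)  # fake measurements

model_a = 3.0 * x + 2.0        # "theory A": linear
model_b = 0.3 * x**2 + 2.0     # "theory B": a quadratic competitor

chi2_a = np.sum(((data - model_a) / sigma) ** 2)
chi2_b = np.sum(((data - model_b) / sigma) ** 2)
print(f"chi2(A) = {chi2_a:.1f}, chi2(B) = {chi2_b:.1f}")

# The model with the lower chi-square describes the data more consistently;
# if both fit badly, the question stays open.
```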

Quote:
Bad science? Rock physical property tests are bad science? What constitutes "good science" exactly?


If the physical property tests are as open to subjective influence as you suggest, then they are distinctly bad science as they give unreliable results. If you are exaggerating their failure rate, then the subjective influences should have a statistically insignificant impact.

Doubtful Dreamer
frozen_water
Unfortunately science is littered with unreproducible experiments, and even those that are reproducible are allowed some percentage of error.


What experiments are unreproducible? As for the error comment, that would be the noise in the signal.
I don't have an exact figure, but even just going off of published articles in scientific journals, the reproducibility rates are extremely low.

EXAMPLES: ~10% ~21-35%

If you read through that second link it even states that industry professionals see it as an unspoken rule to just assume about half of published research is not reproducible.

Quote:
Quote:
It doesn't have to be a bad theory, there are virtually infinite numbers of variables that come into play when testing things.


If the theory has one successful event, there is no method of determining that the one event is anything other than noise. That is why it would be a bad theory. There has been one "successful" measurement of a magnetic monopole, but since this is one measurement in many, many such experiments, there are no statistical tools to bring to bear to determine whether or not that was a real signal or just a random noise event. Thus, it is still fairly safe to say that there are no magnetic monopoles and that the standard model does contain all particles that couple to something other than gravity.

Quote:
Such as cloning animals, it's very difficult to reproduce the results, but it has been done. If we were working with an equally difficult concept it's easy to see how labs, even when working with a theory that is potentially correct, would have difficulty reproducing results, and just because a result is not reproducible does not invalidate the theory, see: Duhem-Quine thesis.


Yes, cloning is difficult, but it is repeated and has been repeated often since the first successful cloning, following more or less the same procedure. Thus, the procedure is validated.

Using the mycoplasma example again, one success does not validate the procedure if all other attempts at following the procedure fail. This means that something not accounted for in the procedure was responsible for the success and not the procedure. Thus, the procedure is incorrect in its current state.
The problem here is that we know even accurate tests can produce results showing a theory to be incorrect when it is correct, and correct when it is not. There is a margin of error that most attribute to noise, but it could be significant data indicating something else; we don't know which, but we make a determination based on probability, not objective truth.

We assume a theory is true or reveals some objective truth just because we can continue to produce similar results repeatedly, but it's still an assumption. Just because my test shows something is likely to happen does not prove that it will.
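
To illustrate the error rates in question (a toy simulation, not real data): run many experiments where the hypothesis being tested is actually true and count how often a standard test wrongly rejects it.

```python
# Sketch: even a properly run test has a built-in false rejection rate.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(7)
n_trials = 2000
false_rejections = 0

for _ in range(n_trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=25)  # the null hypothesis (mean = 0) is true
    _, p = ttest_1samp(sample, popmean=0.0)
    if p < 0.05:
        false_rejections += 1

print(f"False rejection rate ~ {false_rejections / n_trials:.3f}")  # ~0.05 by construction
```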

Quote:
Quote:
The problem is that you are using your own understanding of a disease to determine whether or not a horse has a disease. It's circular logic. Your understanding of that disease is based on how you think horses act when they have it, and you determine that horses have that disease based on the traits you associated with that disease (obtained from observing horses you think have it).


If you have a healthy baseline and a diseased population and can show that the diseased population is off the healthy baseline in a consistent, statistically significant manner, then you have categorized your disease.

Quote:
Round and round we go. Again, you are assuming that there is some objective understanding of the disease and when horses have it. Have you ever watched the show House? Think of that lovely board where they write down all the symptoms and determine what fits; there are several explanations for the same symptoms. That's why you trust a doctor to use his expertise and diagnose you, because there isn't just some objective list I can look at to determine illness. This is the same reason you usually get a second opinion when dealing with serious issues, because it's not an objective process.


Medicine is a poor example as there is rarely a healthy baseline available for the individual, only for the entire population. Since individuals can drift quite far from the healthy baseline without having any medical condition or even being predisposed to any conditions, a study of the individual is a poor comparison to scientific processes conducted on a population. As such, a great deal of subjective opinion does enter when medicine is practiced on the individual, but this is largely a problem of poorly established "parameters" for the individual which require judgement calls [while medical practice makes use of scientific processes, it itself often strongly deviates from use of the scientific method due to personal bias which has been demonstrated several times through surveys of practices used by practicing doctors]
You have categorized a set of symptoms which you believe to be attributed to a disease. Each horse could have a separate ailment producing the appearance of a single disease. Is this likely? No. Is it possible? Yes.

The research and treatment of illness is scientific in nature, so it's not a poor example unless you're trying to claim that the entire medical field is unscientific, which would be an odd position seeing as they receive large sums of money for their scientific research.

Quote:
Quote:
There is no feasible way to sample the entire area and have it broken down, there is a reason geologists use those tests. Assuming I brought in a sample and analyzed it with a mass spectrometer, how do I know if those results are indicative of the entire area?


You would have to use more than one sample. Surely you don't do the test you described on only one sample to categorize the entire area? One sample is insignificant and carries no statistical weight and thus cannot give any information [other than what that sample was made of, but that is useless when you are talking about an area]. Ideally, you should randomly select a number of samples dependent upon the level of confidence you wish and the size of the area under study and then do mass spec on all of them.
And how do I know my samples are indicative of the entire area? My results on a rock only truly show me information about that rock; trying to apply what I learn from one rock or twenty is still just guessing. A logical guess, I'll grant you, but a guess nonetheless; it's not proven.

Quote:
Quote:
And dark matter is a theory, not a fact. It has not been objectively proven that dark matter exists; it's just a belief, which is my point. Science doesn't produce facts, it produces theories. Science is not objective truths, but subjective attempts to understand the universe.


Science is a model, as I have already stated, and dark matter has beaten every other currently available model to a degree that would put it on par with the modern theory of gravity. Also, please do not use "theory" in a colloquial way when discussing scientific concepts as it drastically undermines the testing involved.

Quote:
It's not that there isn't much that can be done about it, there is nothing that can be done about it.


Quote:
Increase the resolution of the experiment. If the universe has a non-infinitesimal resolution, then there will come a point where one theory is better than all competitors. If not, then such a point will never come and which theory is better is subjective opinion, I agree. However, theories which carry weight generally have the same causative mechanism at the current experimental scale, which is why they are indistinguishable.
Best =/= True

That means it's more likely; it does not objectively prove that it's true.

I used theory to represent what it is, it's an educated guess, but still a guess and nothing more. You want to make it into something greater by showing the tests and things that have supported it, but that does not take away from the fact that it can't be proven, it is just a theory and that's all it can be.

Quote:
Quote:
Regardless of the theory, it is still just a theory, not fact, not an objective truth, but a guess. Belief in any scientific theory requires faith.


It is far more than a guess due to rigorous and repeated testing. If it were a truly random guess, it would have an infinitesimal chance of explaining any phenomena in a repeatable, non-specialized manner.
An educated guess is still a guess.

Quote:
Quote:
And what does that provide you with? A general idea of what may or may not be correlated data. Even if you can show that horses are sick, you can't prove why they are sick. It's probably the polluted water, but it could also be a new disease, or a side-effect of a new feed. 1+1=2 is not subjective, but when you try to give it context it loses that objectivity.


If it were a new disease characterized identically to the old disease and thus operating under the same causative mechanisms, is it really a new disease? If it presents different deviations in the population than the disease under consideration causes, then the disease you suspected in the first place is obviously the wrong one.

But, let's say that some of the horses were misdiagnosed to have the disease under suspicion due to individual variance. Since individual variance in the population is characterized, this would be the noise in the signal the statistical tests are meant to weed out. It is known the population is imperfect and that the individuals vary and that misdiagnosis occurs.

As for what the statistical tests give you, they give you a tool of testing theories and the best theory wins. If all the theories fail the tests, then there is no good theory at the present time and it is an open question.
And what if they are weeding out some significant data because it's determined irrelevant to what they are searching for? How do they determine what is noise and what is significant?

What sort of test is there to determine truth? I'll admit to having little knowledge on the range of tests, but I'm not aware of any statistical test that can determine truth or lack thereof, only tests that show how well theories match a given pattern.
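
For what it's worth, a small illustration of that point (Python, invented data): a goodness-of-fit test can be perfectly happy with two different "theories" on the same limited, noisy data, because it measures agreement with the pattern, not truth.

```python
# Sketch: two different models, both consistent with the same limited data.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
x = np.linspace(0.0, 0.5, 15)
sigma = 0.05
data = x + rng.normal(scale=sigma, size=x.size)   # generated from the "linear" truth

for name, model in [("linear y = x", x), ("sine y = sin(x)", np.sin(x))]:
    chi2_stat = np.sum(((data - model) / sigma) ** 2)
    p = chi2.sf(chi2_stat, df=x.size)             # survival function = 1 - CDF
    print(f"{name}: chi2 = {chi2_stat:.1f}, p = {p:.2f}")

# Over this small range x and sin(x) are nearly identical, so both "theories" fit;
# the test measures agreement with the data, not which one is true.
```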

Quote:
Quote:
Bad science? Rock physical property tests are bad science? What constitutes "good science" exactly?
If the physical property tests are as open to subjective influence as you suggest, then they are distinctly bad science as they give unreliable results. If you are exaggerating their failure rate, then the subjective influences should have a statistically insignificant impact.
How can I be exaggerating the failure rate? I never said anything about the test failing. It's only an example of subjectivity as a regularly practiced test in science.

Side note: I apologize for the late response, I had some obligations to attend to that took priority over posting on Gaia.
Science is subjective, but it is closer to objective than religion and government.
