I don’t think there’s anything better or worse about using fractions versus decimals. Numbers are numbers… but your example just shows you have a *preference* for one method over the other, not that either is objectively better.
Your last example is literally exactly the same precision. Did you struggle with “significant figures” in school? Lots of people raised in American schools do.
I don’t think you quite got his point, since they are not literally the same. 32/64 implies a precision of 1/64, or 0.01563. 0.5 implies a precision of 0.05, i.e. half the increment of measurement (0.1 in this case).
I don’t agree, however, that fractions are inherently more precise, since the implied precision is arbitrary in either notation. For instance, 0.5000 is much more precise than 32/64 or 1/64.
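A rough sketch of that comparison in Python, assuming the conventions used in this thread: a fraction implies a precision of one scale increment (1/denominator), and a recorded decimal implies half the place value of its last digit. Both conventions are assumptions here, not universal rules.

```python
from fractions import Fraction

def fraction_precision(denominator: int) -> Fraction:
    """Implied precision of a fraction reading: one increment of the scale."""
    return Fraction(1, denominator)

def decimal_precision(recorded: str) -> float:
    """Implied precision of a recorded decimal: half the place value of its last digit."""
    places = len(recorded.split(".")[1]) if "." in recorded else 0
    return 0.5 * 10 ** -places

print(fraction_precision(64), float(fraction_precision(64)))  # 1/64 0.015625
print(decimal_precision("0.5"))                               # 0.05
print(decimal_precision("0.5000"))                            # 5e-05
```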
It’s not that precision can’t be recorded arbitrarily high with fractions; it’s that in decimal the precision can’t be recorded exactly. Decimal is essentially fractional notation written differently, ignoring every fraction whose denominator isn’t a power of 10.
How can a measurement of 3/4 that’s precise to 1/4 unit be recorded in decimal using significant figures? The most correct answer would be 1. “0.8” or “0.75” suggest a precision of 1/10 and 1/100, respectively, and sig figs are all about eliminating spurious precision.
If you have two measurement devices and one is 5 times more precise than the other, decimal doesn’t show it, because decimal can only increase precision by powers of 10.
In the case of 1/64 above, writing it out as 0.01563 shows a false precision of 1/100,000.
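A quick sketch of both points under the same assumed conventions: dividing 1/64 all the way out produces digits whose place values overstate what a 64ths scale can resolve, and recording 3/4 ± 1/4 honestly in decimal pushes you to the nearest whole unit.

```python
from fractions import Fraction

# Dividing 1/64 out gives 0.015625; the last digit sits in the millionths place,
# implying far more precision than a 64ths scale actually has.
print(float(Fraction(1, 64)))   # 0.015625

# A reading of 3/4 on a scale good to 1/4 unit: even one decimal place (0.8)
# implies 1/10 precision, which overstates 1/4, so an honest sig-fig record
# keeps zero decimal places.
reading = 0.75
print(round(reading, 1))        # 0.8 -- still overstates the precision
print(round(reading))           # 1  -- the "most correct" sig-fig record
```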
Significant figures are what I’m talking about. The entire point of them is to prevent spurious precision. How do you record a measurement of 3/4 precise to 1/4 using sig figs?
You can’t do .75, because that implies a precision 25 times greater than the measurement.
You can’t do .8, because that still implies a precision 2.5 times greater than the measurement.
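The arithmetic behind those ratios, as a quick check; the 1/100 and 1/10 figures are the precisions implied by two and one decimal places, under the same assumption as above.

```python
measured_precision = 1 / 4   # the scale is only good to a quarter unit

implied_by_0_75 = 1 / 100    # two decimal places imply hundredths
implied_by_0_8 = 1 / 10      # one decimal place implies tenths

print(measured_precision / implied_by_0_75)  # 25.0 -- 0.75 overstates precision 25x
print(measured_precision / implied_by_0_8)   # 2.5  -- 0.8 still overstates it 2.5x
```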
0.75 ± 0.25, is that what you mean? If so, here you go; that’s how any statistician would do it.
That’s not a number - that’s a sentence that takes up 3 times as many characters as 3/8.
3/8 is more efficient.
Sure dude
Now do 0.75 ± 0.05 with a fraction
15/20
Wtf
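For what it’s worth, 15/20 does check out under the one-increment convention assumed earlier in the thread (not a formal standard): a denominator of 20 gives an increment of 1/20 = 0.05, so 15/20 carries the same value and implied precision as 0.75 ± 0.05.

```python
from fractions import Fraction

numerator, denominator = 15, 20
value = Fraction(numerator, denominator)  # note: Fraction normalizes this to 3/4
increment = Fraction(1, denominator)      # one increment of a 20ths scale

print(float(value))      # 0.75
print(float(increment))  # 0.05
```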
So it’s 1.