Originally Posted by: Jean Giraud By the way, your Reynolds formula, being physically incorrect, also doesn't compensate for using
centipoises.
Originally Posted by: Jean Giraud So you are reasoning that engineers should:
- substitute their understanding of the physics of what they work with with some authorized magic formulas,
- blindly believe that those formulas are correct?
Let me link here
an example article that discusses one Russian standard where gas hydraulics formulas are given. Unfortunately, it's in Russian. It discusses the incorrectness of those formulas and the causes that led to it. Among them are the use of "magic coefficients" that stem from using values in specific nonstandard units (like your cP, or t/hr), the "masking" of real physical laws behind "convenient" formulas (they are "convenient" only when you have to do the calculations on paper), and the arbitrary rounding of those coefficients. Contemporary software is capable of taking your favorite units and doing a
robust conversion to standard units, so that you only need to know the physics, without having to remember useless coefficients.
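To make that concrete, here is a minimal sketch (with hypothetical pipe data, not taken from this thread) of doing the unit conversion once, at the boundary, so the formula itself stays the pure physical law Re = ρvD/μ with no magic coefficients baked in:

```python
# Convert "convenient" field units to SI at the boundary, then apply
# the pure physical law. No magic coefficients inside the formula.

CP_TO_PA_S = 1e-3   # 1 centipoise = 1e-3 Pa*s
MM_TO_M = 1e-3      # 1 mm = 1e-3 m

def reynolds(rho_kg_m3: float, v_m_s: float, d_m: float, mu_pa_s: float) -> float:
    """Re = rho * v * D / mu, all arguments in SI units."""
    return rho_kg_m3 * v_m_s * d_m / mu_pa_s

# Hypothetical field data in "favorite" units: water at ~20 C in a 50 mm pipe.
mu_cp = 1.0     # dynamic viscosity, centipoise
d_mm = 50.0     # pipe diameter, millimetres
rho = 998.0     # density, kg/m^3
v = 2.0         # velocity, m/s

# Convert once; the physics stays unit-clean.
re = reynolds(rho, v, d_mm * MM_TO_M, mu_cp * CP_TO_PA_S)
print(re)  # ~99800, clearly turbulent
```

A tool with real unit support does this conversion for you automatically; the point is that the conversion factors live at the input, not hidden inside the law.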
Having those magic formulas in standards is simply a remnant of the era when there was no software that let you express physical laws in their natural form, when you had to use such "convenience" formulas just to do the math in a timely fashion. And they
do have errors in them because of that. Reasoning like yours would just prevent
fixing those standards to contain pure physics instead of that crap.
I know many
engineers who don't understand the underlying physics and just use those formulas as "black boxes" to get answers. That leads to improper use of the formulas, simply because they cannot see their range of applicability! And it's difficult to catch errors in their complex calculations because of those cryptic formulas.
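As a sketch of what "seeing the range of applicability" can look like in code (the correlation is real; the guard and the numbers are illustrative), consider an empirical formula such as the Blasius friction factor for smooth pipes, f = 0.316/Re^0.25, commonly quoted for 4000 < Re < 10^5:

```python
# An empirical correlation wrapped with its range of applicability,
# so it cannot silently be used as a black box outside that range.

def blasius_friction_factor(re: float) -> float:
    """Blasius friction factor for smooth pipes, f = 0.316 / Re**0.25.
    Commonly quoted validity range: 4000 < Re < 1e5."""
    if not (4000.0 < re < 1e5):
        raise ValueError(f"Blasius correlation not applicable at Re={re:g}")
    return 0.316 / re ** 0.25

print(blasius_friction_factor(5e4))    # inside the range: fine
try:
    blasius_friction_factor(500.0)     # laminar flow: refuse, don't guess
except ValueError as e:
    print("caught:", e)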
Originally Posted by: Jean Giraud Actually, your point here eludes my understanding. From what I see, it looks exactly analogous to this reasoning:
"Let's take some common words: "Number of minutes". Now, let's "multiply" (mix) them by some other word: "min". Let's see result: "N
mu
im
nb
me
ir
n mo
if
n mm
ii
nn
mu
it
ne
ms
i". Utter garbage! Never use units!"
Using units properly in math software is not taking a number that you have explicitly made unitless and then trying to read its absent units back off it.
Originally Posted by: Jean Giraud So what? You have a job that requires you to do dimensionless maths.
Does that mean that everyone must do the same regardless of their circumstances?
Originally Posted by: Jean Giraud Alex, what's good about units: they are democratic. Very useful up to college level.
m*m*m = m^3 and similar stuff. Entering advanced maths, symbolic engines reply
in red [Mathcad 11] or a full page in blue [Mathematica 4.0].
Prof. V. F. Ochkov of Moscow Power Engineering Institute, an expert in Mathcad and other math packages and one of the authors of the IAPWS standards and their software implementation, has a number of articles
in Russian and
in English where he discusses units in math in depth. I couldn't do better.
I just want to emphasize the difference between "pure" math and applied computations.
Sometimes people do pure math. And yes,
it is unitless.
But math is just the
language of science, so it's science that
uses math to express its relations. Science fills math with meaning, and that meaning may be expressed with units. Proper use of units may help you understand relations, see new relations, discover shortcomings of a theory, or catch logical errors in input data.
Having a tool that allows you to quickly catch errors is golden in any work. However pedantic you are,
you aren't bullet-proof against errors. Even though you may "dismiss" some errors as "irrelevant", they are inevitable. And being able to catch some percentage of them almost
for free (modulo some short training) is precious! Yes, with a very trained brain you may make fewer errors, but the price is attention to detail that could be better spent doing useful work, if you let software do some of the muscle work for you. And if some software gives you errors when you use units, this may indicate either that you are using improper math (go fix it!), or that you're dealing with empirical formulas, or maybe that there is a problem in the software
that should be fixed, rather than proper software behaviour.
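The principle is simple enough to sketch in a few lines. Here is a toy quantity type (the class name and dimension encoding are my own illustration; real tools like Mathcad's units or Python's pint do this far more completely) that carries its dimensions and refuses nonsensical operations:

```python
# A toy quantity that tracks its dimensions and raises on a dimensional
# mismatch, catching the error at the exact point where it is made.

class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims          # e.g. {"m": 1, "s": -1} for velocity

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"cannot add {self.dims} to {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for k, v in other.dims.items():
            dims[k] = dims.get(k, 0) + v
        return Quantity(self.value * other.value,
                        {k: v for k, v in dims.items() if v != 0})

length = Quantity(2.0, {"m": 1})
time = Quantity(4.0, {"s": 1})

area = length * length            # fine: dimensions become {"m": 2}
try:
    nonsense = length + time      # dimensional error, caught immediately
except TypeError as e:
    print("caught:", e)
```

The unitless version of the same mistake would just silently produce 6.0 and carry the error deep into the calculation.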
Let's look at an analogous case. In recent decades, programming languages have evolved immensely, and they started without type checking. When type checking appeared, it quickly became a great tool for catching errors more easily. It has its rough edges, but it continues to evolve and gets better with each iteration. And people don't argue that "type checking is for kiddies"! Because saving an hour a day of dumb work allows me to do more work in that hour, instead of being proud of being a "true hacker" while accomplishing less.
Units aren't a silver bullet. They must be used wisely; you need some training to do that. Their systems evolve; they aren't something established for the ages. There are times when you need some extra effort (especially when dealing with empirical formulas). There are tasks that are better done without them. There are people who dislike them. But it's
plain wrong to blindly call them useless and to ask everyone to abandon this tool!
Edited by user 09 April 2016 05:02:10(UTC)
| Reason: Not specified