## Truth, Lie, Highly-Nonlinear Function, Entropy and Isaac Asimov

Today I saw a quote from Isaac Asimov:

> The closer to the truth, the better the lie, and the truth itself, when it can be used, is the best lie.
>
> — Isaac Asimov

It inspired me to think about how to model the quote mathematically. However, I drifted a bit from the original idea and ended up with another point of view, similar to the original but not quite the same.

First, let’s investigate what the original quote gives us. According to my interpretation, it means that the closer a lie is to the truth, the better the lie. Consequently, on some occasions, the truth itself can be considered the best lie. A lie in the disguise of the truth is the most difficult lie to catch. So what makes the truth different from the lie if they are “actually the same point” in an abstract space?

Observation may be the main actor that creates the impression that A and B are “actually the same point”. Say we have a function f(X, Z) such that point A: f(X=x, Z=z1) ≠ point B: f(X=x, Z=z2). However, if we eliminate the dimension Z from the space, points A and B become the same point. This implies that if we can observe only X but not Z, we may think A and B are the same; but that is not true, because in reality there is a Z which we cannot observe. [show picture here]
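The observation idea above can be sketched in a few lines. This is a minimal illustration, not anything from the original post: the concrete coordinates and the `observe` projection are my own hypothetical choices.

```python
# Two points that differ only in the hidden coordinate Z.
# A = f(X=x, Z=z1) and B = f(X=x, Z=z2): distinct points in (X, Z) space.
A = (1.0, 0.5)   # (x, z1)
B = (1.0, 2.0)   # (x, z2)

assert A != B    # with Z observable, they are different points

# An observer who can only see X applies the projection (x, z) -> x.
def observe(point):
    x, _z = point
    return x

# After eliminating the Z dimension, A and B collapse to "the same point".
assert observe(A) == observe(B)
```

The projection throws away exactly the information that distinguishes the truth from the lie, which is why the observer cannot tell them apart.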

Basically, Isaac Asimov raised a very interesting inference: if a lie that is closer to the truth is called a better lie, then the lie that lies exactly at the truth ought to be called the best lie! It is sort of paradoxical; I like the perspective a lot.

In comparison, I would like to propose my own point of view: the function embracing “truth” or “lie” is extremely nonlinear, but it exists. However, with a new definition of measurement, the function can be made very “linear”.

What we call a better lie can be modeled by the function y(x) = 1/x, where x = 0 is the truth. When x approaches 0 from the right-hand side, y gets bigger: a better lie. However, at the exact point 0, y is not defined, which means the lie is not defined there. Some may argue that at x = 0, y = ∞, which implies the best lie ever. For me, the ∞ only exists if the true 0 exists. If we are really at the true 0, then the function value y = ∞ is not defined there. In other words, y at x = 0 is definitely not a lie any longer in this space, since it is not defined here. Guess what, I think that is called the truth in the other, dual space instead. So I think “… and the truth itself, when it can be used, is the best lie” is actually not the true 0 but a pseudo 0; i.e. extremely close to 0.
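The behavior of y(x) = 1/x near the truth can be checked numerically. A small sketch (the sample points are my own): y grows without bound as x → 0⁺, and at x = 0 itself Python refuses to produce a value at all, which matches the claim that the lie is simply not defined at the truth.

```python
# y(x) = 1/x: the closer x gets to the truth at 0, the bigger the lie y.
def y(x):
    return 1.0 / x

for x in [1.0, 0.1, 0.01, 0.001]:
    print(x, y(x))   # y grows without bound as x -> 0+

# At the true 0 itself, y has no value: the "lie" is not defined there.
try:
    y(0.0)
except ZeroDivisionError:
    print("y is not defined at x = 0, the truth")
```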

How do we make the truth and the lie defined in the same space or dimension? One way I can model it mathematically is with the equation y = x·log(1/x). The equation gives me the impression that y is the outcome of a battle between x and log(1/x); one pulls up and the other pulls down, with some interesting strategy depending on the battlefield x. The most interesting point of this equation, for me, is x = 0: log(1/x) = ∞ while x = 0, so who is gonna win here? It seems x is the winner, since lim y as x → 0⁺ is 0. That means x = 0 is exactly the truth, and away from 0 is the lie.
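The tug-of-war above is easy to verify numerically. A quick sketch (sample points are my own; the natural log is used, though the base does not change the limit): the linear factor x overpowers the blow-up of log(1/x), so x·log(1/x) shrinks toward 0 as x → 0⁺.

```python
import math

def y(x):
    # The battle: x pulls y down toward 0, log(1/x) pulls it up toward infinity.
    return x * math.log(1.0 / x)

# As x -> 0+, the linear factor x wins and y -> 0.
for x in [0.1, 0.01, 0.001, 1e-6]:
    print(x, y(x))

# This limit is why entropy takes 0 * log(1/0) = 0 by convention:
# at x = 0, "the truth", the term contributes nothing.
```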

You know what, the term y = x·log(1/x) is the building block of “entropy”, introduced by Claude Shannon. Of course, he is one of my greatest idols. Yes, the truth and the lie have something to do with entropy! What is that? ^_^

Hello,

To your last remark: have you done any more thinking on the difference between information and disinformation with regard to entropy? Lying is one peculiar concept; what does it introduce into the sphere of entropy thinking? Lying is so different from a message with low information content.

br

Mats