Disclaimer: I don't mean to pick on people or things. For the record, I predicted the demise of start-ups that were bought for many, many times their initial investment. Here I describe the 3 criteria I use to judge whether I will depend on some piece of software to build upon.
It seems "obvious" to me that using Jai (the programming language) or CLAP (an open-source audio plugin format) is a rather risky choice (3 criteria out of 3).
On the contrary, using the D language is seen as a big risk in my circles, even though by my criteria it's not (0 out of 3).
A small business like mine shouldn't take unsubstantiated bets. You cannot build on sand. I'll try to dissect my reasoning about what I consider sand and what I consider concrete.
But what if I were just rationalizing my desires, giving them a varnish of rationality? It's a possibility, so let's analyze it.
Things that last need almost boundless funding compared to things that are just made as a one-off.
This kind of longevity takes a lot of dedication and money.
Things that last for a while are typically made by:
It's important to know how likely it is that the people building the artifact you would depend on will be forced to stop, due to lack of money.
Example:
- SOUL was made by a money-losing start-up. The start-up (predictably) died and SOUL was buried with it, since it was not making money.
- Thekla has funding because of The Witness, but the astute observer will note that this game and the one before it both nearly bankrupted the company. If Thekla goes near-bankrupt again with its next game, how will it find the funding to keep developing Jai on top of it? On top of that, games are one of the hardest markets to be in.
- CLAP is mainly made by a contractor. Whatever the intention, if the money dries up, there might not be any reason for a contractor to keep working. To put it politely, audio software companies are more interested in piling up money than in giving it to open source. If they loved open source, they would have funded and supported LV2 instead. Meanwhile, LV2 has survived, maintained, for years and years.
Whatever is a moneymaker will probably get maintained and blossom, even in the complete absence of passion.
Using Flutter would make me pause, because Google doesn't really NEED it to operate. For all we know, it could invent an even superior UI library and abandon this one, leaving you with debt. Google has unlimited funding, but Flutter will be dumped if keeping it isn't strategic.
At a more fundamental level, what you depend on must be an integral part of its creator's plan, strategy, and perhaps a bit of its personal narrative.
Example:
- Thekla doesn't NEED Jai: they are writing a Sokoban game in it, but their success stems from The Witness (a C++ game), not from a Jai game. What happens if the game fails? No incentive to develop Jai at all. That completely changes if the game is a financial success!
- u-he and Bitwig do not NEED CLAP: they funded it, but likely pro bono, and their success builds upon other, unrelated things (successful audio software). The problem is that while there is a need for such a plugin format, LV2 already exists and solves the same problem.
It is important to correct for hype.
Typically, people ask others "why not use X?" for a few years during a "window of hype". It can even get tiring. For example, D users being pressured to rewrite in Rust, as if the language were 100% independent of the thing being made.
What the heck? I mean, this technique is exactly the same as Instagram pyramid NFT schemes.
I've found that hype is a great indicator of something that is likely overrated and unsustainable. Things that are good (like PHP or FLStudio) instead quietly power the world without too much noise. The presence of noise indicates they are not going to be a mainstay, or that they are only going to appeal to higher-class people.
Example:
- The SOUL language was presented as "the future", and discontinued the next year. The compiler was kept closed-source, probably in anticipation of that event.
- Jai has been 9 years in the making. It is kept closed-source and in private beta, either to generate excitement, or to prepare for keeping it private/abandoned.
- CLAP is mentioned to me, but with less energy in 2023 than in 2022; in the meantime, LV2 is also making progress. The current hype wouldn't have happened without a marketing effort, something LV2 never had.
- FLStudio is by a large margin the most used DAW in the world, but measured by Internet noise you would think it is Ableton Live.
- Odin language doesn't have this problem.
- Resolve is amazing software with well-deserved hype, something you can check for yourself.
What kind of programming language (or programming language feature) could take advantage of our newfound ATM Theory of Software?
A feature that seems related to artifact ATM cost is the `deprecated` keyword in Java and D.
`deprecated` is a statement that says: "This has (probably) negative ATM".
Another way to express this is a comment that voices discontent with something, typically:
// this is bad
// to be removed
// inadequate for xyz reasons
But why do these deprecations accumulate in real programs?
Programs are littered with interesting debt statements in the form of `TODO` comments.
`TODO` is a statement that says: "This change would (probably) have positive ATM". There is something "to do", but why not now? Is it a certainty? The comment usually explains what it is waiting for.
If part of the cost is statistical uncertainty (typically: "extend this only if the artifact ends up being used"), the TODO usually reflects that Bayesian cost in its comment content.
`deprecated` and `TODO` act roughly as the reverse of each other, and express a statement about the profitability of some software artifact: "good" or "bad".
Both could be made "conditional" and explain the key Bayesian decision behind them. Why not be able to express "this needs to be removed, if xyz"?
We can see intuitively that "to do" and "deprecated" are actually the same, in the same way that software assets and software debt are the same. Each provides an ATM statement about an artifact.
Moreover, the time during which these annotations accumulate is also significant, and makes a statement about why the ATM of acting is not that positive. For example, it may be too costly to remove a widely used abstraction.
Tracking TODO and deprecated over time could maybe hint at areas where value is concentrated.
The experimental proposal is this:
A first-class TODO system that replaces both the deprecated keyword and TODO comments, and is tracked over time to estimate the ATM of a particular artifact (class, function, file...).
The compiler would track the use of sub-artifact debt estimates, and could provide an estimated ATM cost for the maintenance of the complete program.
In other words: if you depend on an abstraction that is rated negatively in ATM, then your own ATM depreciates (you will need to remove the dependency eventually), and conversely when depending on an abstraction with many positive TODOs. A bit like cyclomatic complexity, in a way.
Some classes are rather perfect and contain no TODO or deprecation. We postulate that their ATM is smaller in absolute value than that of the surrounding code. If it weren't, there would be change friction around them. The programmer probably anticipated that neither improving nor removing them would bring value.
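As a toy sketch of what this could look like (everything below is invented for illustration: the decorators, the registry, and the naive ±1 scoring are not an existing tool), a compiler or linter could aggregate such statements per artifact:

```python
# Registry of ATM statements attached to artifacts: name -> list of (sign, reason)
_atm_statements = {}

def todo(reason):
    """Positive-ATM statement: 'changing this would (probably) pay off'."""
    def wrap(fn):
        _atm_statements.setdefault(fn.__qualname__, []).append((+1, reason))
        return fn
    return wrap

def deprecated(reason):
    """Negative-ATM statement: 'depending on this (probably) costs you'."""
    def wrap(fn):
        _atm_statements.setdefault(fn.__qualname__, []).append((-1, reason))
        return fn
    return wrap

@deprecated("superseded by resize_v2, remove once no caller remains")
def resize_v1(image):
    return image

@todo("extend to 16-bit images, but only if this artifact ends up being used")
def resize_v2(image):
    return image

def estimated_atm(name):
    """Naive per-artifact ATM estimate: the sum of +1/-1 statements."""
    return sum(sign for sign, _ in _atm_statements.get(name, []))

print(estimated_atm("resize_v1"))  # -1: depending on it depreciates your ATM
print(estimated_atm("resize_v2"))  # +1
```

A real version would weight statements by age and by how many dependents each artifact has, rather than counting ±1.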
In a surprising sequence of events, the month from mid-July to mid-August 2019 was a lot more eventful than usual.
All that happened while I didn't really care since I was in the very important process of releasing a product.
Being adventurous is not exactly a goal of mine; I consider it more of a fault. I'm thankful such a high-intensity month never repeated.
I want to reason about qualitative "traits" that software artifacts may have. As with personality studies, if we can enumerate the possible traits and find the correlations, we may stumble upon a "Big Five" of software.
The method: enumerate candidate traits, rate a set of artifacts along each one, then look at the correlations between traits.
Here are some traits I can think of:
H. Private (H+) vs Public (H-) Is information hiding important in this artifact? Are namespace clashes likely?
D. Declarative (D+) vs Imperative (D-) Does the artifact describes things to do, or do it directly?
U. Pure (U+) vs Mutable (U-) Does the artifact promote lack of state?
Q. High-quality (Q+) vs Low-quality (Q-) Does the artifact NEED to be "high quality"?
S. Large (S+) vs Small (S-) Does the artifact NEED to be small to justify existence?
B. Buggy (B+) vs Correct (B-) Does the artifact NEED to be correct to justify existence?
F. Fast (F+) vs Slow (F-) Does the artifact NEED to be fast to justify existence?
T. Top-down (T+) vs Bottom-up (T-) Was the software made with top-down design, or with bottom-up design?
R. Reused (R+) vs Throwable (R-) Does the artifact NEED to be reusable to justify existence?
C. Creative (C+) vs Conventional (C-) Do you know what the result will be like at creation?
U. User-rich (U+) vs User-poor (U-) Does the artifact offer many modalities?
P. Professional (P+) vs Consumer (P-) Is the artifact intended for consumers?
M. Maintenance (M+) vs No-maintenance (M-) Does the artifact need much maintenance?
N. Big runtime (N+) vs Small runtime (N-) Does the artifact rely on another big artifact to exist? (relative size)
E. Errors matter (E+) vs Errors can be ignored (E-) Does the artifact NEED to manage errors in a meaningful way?
Obviously this chosen set isn't perfect at all.
Let's rate all of the artifacts along those axes, and see what happens.
H- D+ U+ Q- S- B+ F- T- R- C+ U- P+ M= N+ E-
H+ D= U= Q+ S+ B+ F+ T+ R- C- U+ P- M+ N- E+
H- D= U- Q+ S+ B- F+ T= R- C+ U+ P- M= N+ E-
H+ D= U= Q= S+ B- F+ T= R- C+ U- P+ M+ N= E+
H- D- U- Q+ S- B- F+ T- R+ C+ U- P+ M- N- E+
H+ D- U+ Q+ S+ B- F+ T- R+ C- U- P+ M- N- E+
H- D- U- Q= S- B+ F= T+ R= C= U- P+ M- N- E-
H- D= U- Q- S+ B- F- T- R- C- U- P+ M- N= E-
H- D+ U+ Q- S+ B+ F- T+ R- C+ U+ P- M= N+ E-
H- D= U- Q+ S+ B- F+ T- R- C+ U+ P- M= N+ E-
H- D- U+ Q- S- B+ F- T- R- C+ U- P+ M- N= E-
H- D- U- Q- S+ B+ F- T+ R= C- U+ P+ M- N- E-
H+ D- U+ Q+ S- B- F- T+ R+ C+ U- P+ M- N- E+
H- D= U- Q= S+ B- F- T- R- C+ U- P+ M+ N- E+
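Out of curiosity, this correlation hunt is easy to mechanize. Here is a sketch that encodes + as +1, = as 0, and - as -1 over the rating lines above, then computes pairwise Pearson correlations (U appears twice in the trait list, so it is disambiguated as U1/U2 here):

```python
import numpy as np

# The 14 rating lines above; '+' -> +1, '=' -> 0, '-' -> -1.
ratings = """\
H- D+ U+ Q- S- B+ F- T- R- C+ U- P+ M= N+ E-
H+ D= U= Q+ S+ B+ F+ T+ R- C- U+ P- M+ N- E+
H- D= U- Q+ S+ B- F+ T= R- C+ U+ P- M= N+ E-
H+ D= U= Q= S+ B- F+ T= R- C+ U- P+ M+ N= E+
H- D- U- Q+ S- B- F+ T- R+ C+ U- P+ M- N- E+
H+ D- U+ Q+ S+ B- F+ T- R+ C- U- P+ M- N- E+
H- D- U- Q= S- B+ F= T+ R= C= U- P+ M- N- E-
H- D= U- Q- S+ B- F- T- R- C- U- P+ M- N= E-
H- D+ U+ Q- S+ B+ F- T+ R- C+ U+ P- M= N+ E-
H- D= U- Q+ S+ B- F+ T- R- C+ U+ P- M= N+ E-
H- D- U+ Q- S- B+ F- T- R- C+ U- P+ M- N= E-
H- D- U- Q- S+ B+ F- T+ R= C- U+ P+ M- N- E-
H+ D- U+ Q+ S- B- F- T+ R+ C+ U- P+ M- N- E+
H- D= U- Q= S+ B- F- T- R- C+ U- P+ M+ N- E+""".splitlines()

value = {'+': 1, '=': 0, '-': -1}
matrix = np.array([[value[tok[-1]] for tok in line.split()] for line in ratings])

corr = np.corrcoef(matrix, rowvar=False)  # 15 x 15 trait correlation matrix

traits = "H D U1 Q S B F T R C U2 P M N E".split()
pairs = sorted(((abs(corr[i, j]), traits[i], traits[j], corr[i, j])
                for i in range(15) for j in range(i + 1, 15)), reverse=True)
for _, a, b, c in pairs[:5]:
    print(f"{a} vs {b}: {c:+.2f}")
```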
Here are the strongest correlations in our small dataset:
"Consumer" software is associated with "User-rich". Which could also mean that non-consumer software gets away with being poor in this regard, offering fewer modalities.
Software where error reporting matters is also software where information hiding matters. Perhaps an explicit "throwable" trait would highlight this lack of correctness, and under what conditions.
Software written in a "declarative" style is also associated with a bigger runtime.
Declarative code is less meant to be reused, which makes sense if we think of HTML, CSS, etc.
Software that needs to be fast often also needs to be high-quality, or vice versa. We would perhaps need to clarify what high-quality means.
Well, that's not bad for an informal, made-up dataset. Done with more rigor, this could perhaps uncover useful phenomena and even reduce software artifacts to a few core dimensional traits.
DISCLAIMER: I realize I sound like a madman. This post is not for well fleshed-out ideas, rather for things that I feel should happen eventually. Please bear with me.
These days it's all about AI, and Web3, and crypto, and other things I can only classify as distractions. This is the neverending noise of our times.
I'm afraid that real progress may not look anything like that. It may well hide in the little things. Actually my belief is that we could live in a futuristic world if only we paid attention.
Here is what I think humanity should do.
Use meta-research systematically to detect promising, formerly ignored papers and techniques.
Meta-researchers have now produced lots of recommendations that we just need to apply to discover more things, quicker. That is, if we want to keep doing scientific research.
The police has its FBI, located above it in the hierarchy. Why shouldn't Research have a feedback loop too?
The fast development of LLMs and of animal studies could perhaps enable us not only to communicate with other species, but also to ask them for help in light of the ecological catastrophe.
If we can convince trees, or their fungal complexes, to pump more CO2 from the air, why not try?
In this hidden blog, I have laid the foundation for explaining software through a self-discovered, tautological theory of software. It also applies to Intellectual Property in general (including remixes) and to thought systems. Though not terribly useful, I think it could bridge economics and software development in a unified framework. Because I suspect there is nothing mysterious about software, just white-collar conceptual goo.
Trauma processing could perhaps become routine instead of a mysterious practice. Besides relief, one side-product of trauma relief is the training of our capacity to "change our mind", probably through some kind of neural weight remapping.
This capacity need not be linked to traumas at all; it is separate. We could use more of that to change our mind when needed. I believe anyone can be trained to do it.
This has huge implications for quality of life and self-determination. Once void of parasitic thoughts, humans can feel "neutral" and suitably empty, ready for proper judgement.
I'm now pretty sure the right mental manipulation can solve narcissistic personality disorder. For some reason there is no will to uncover this secret.
I'm also convinced that attachment style can be changed through a similar technique, working on primary attachment objects.
This could be a huge help in reducing harassment and the general inadequacy of narcissism, and could help people build rapport.
I suspect there is a whole range of far-reaching, violent but impressive mental manipulations like this left to discover.
We could research small-scale volitional personality changes and find the factors that can lead someone to become more or less extraverted, for example, or more or less open to experience.
We need to investigate the link between the brain and the immune system. CBT-like methods can probably help with a wide variety of troubles that were thought out of scope. When they don't help, it seems that at least the problem gets less noticeable.
If people are teaching "Body Reprogramming" for fibromyalgia, maybe we could make people move their teeth for sleep apnea instead of waking up.
If one could synthesize "status", there would be less need to spend one's life pursuing status at the expense of others.
Through virtual reality, people could experience low or high status at will, leading to "status awareness". This is a very dangerous idea, since the idea of moving up in class is what structures the whole society.
But this will be helpful to...
Today organizations rely on status, ostracism, and material advantages to function. Power is concentrated to hide information and appeal to our gregarious minds. Their primary mechanism is obedience.
There are lots of attempts to improve the organization, from sociocracy to holacracy. The new ways tend to conceal the inherent conflicts instead of giving them form, and also to increase engagement. Their primary mechanism is status.
We need to find fair structures that can work with low-engagement AND fluid power AND unbalanced access to the power of speech. What will keep people motivated?
For that problem we will avoid all discussions of ownership, as they are often tautological.
Some things you can't know before building them, for example the outcome of an algorithm, or the cost of a bug fix.
Edouard resizes images with lanczos3. He's trying the magic kernel, but the result has a new bug. What to do? Persevere, or abandon the effort?
In entity E's codebase, there is an image resizer R. E wants to try another image resizing algorithm A, and keep the best one. R' = R + A is created. Unfortunately, to use the new algorithm, a bug fix B must be found first: a memory corruption.
Whether the new resizer is better (R' better than R) can only be assessed once R' has been built. Whether the bug can be fixed can only be assessed once R'', with R'' = R' + B, has been built. We suppose independent decomposition of R' into R and A, and of R'' into R' and B.
What should E do? Should it keep:
- R? (the old resizer)
- R' = R + A? (the new resizer)
- R'' = R + A + B? (the new resizer, fixed)
(We can rule out R' if we know the new algorithm can't be better.)
Obviously, the ATM outcome of doing nothing is the "BATNA" choice: ATM(R).
For the R vs R' decision
The algorithm should be built if:
ATM(R') - ATM(R) > 0
Under our postulate of independence this is equivalent to:
ATM(A) > 0

with A being the specific modification of R that implements the new algorithm.
For the R' vs R'' decision
Likewise, the algorithm should be debugged if the fix is a positive ATM artifact:
ATM(B) > 0
We need a way to decompose ATM along conditionals, much like we assume a stock price will vary in the future depending on yet-unknown events (such as the earth exploding).
We'll define conditionals with small letters.
ATM(X | e) = ATM of artifact X, given that e will occur in the future.
ATM(X | ~e) = ATM of artifact X, given that e will not occur in the future.
ATM(X) being a perfect, unknown price, it is equal to either exactly ATM(X | e) or exactly ATM(X | ~e).
Let c be the conditional "the new algorithm is better than the old one".
Let D be the ATM gain of "having a better algorithm".
We estimate:
ATM_approx(A | c) = price of writing(A) + price-of-maintaining(A) + D
and
ATM_approx(A | ~c) = price of writing(A)
Thus we can estimate ATM(A):
ATM_approx(A) = ATM_approx(A | c) * prob(c) + ATM_approx(A | ~c) * (1 - prob(c))
So this is exactly an expected value over the conditional: a Bayesian rule.
Our solution is now:
ATM_approx(A) = price of writing(A) + ( price-of-maintaining(A) + D) * prob(c)
We can evaluate this to know, approximately, whether it is worth attempting to write A.
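To make that evaluation concrete, here is a minimal sketch of the formula above. The function name and every number are invented for Edouard's case; costs are negative:

```python
def atm_of_attempt(writing, maintaining, gain, prob_success):
    """ATM_approx(A) = writing + (maintaining + gain) * prob(c),
    per the formula above. Costs are negative numbers."""
    return writing + (maintaining + gain) * prob_success

# Hypothetical figures: 2 days to write A, 5 days of lifetime maintenance,
# a gain D worth 20 day-equivalents, and a 40% chance the magic kernel wins.
print(atm_of_attempt(-2.0, -5.0, 20.0, 0.40))  # 4.0 > 0: worth attempting A
```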
Should E fix the bug?
This is equivalent: E should attempt to fix the bug if ATM_approx(B) is estimated to be > 0.
Let b be the conditional "E manages to fix the bug".
Let G be the ATM gain of "having the bug fixed".
ATM_approx(B | b) = price of writing(B) + price-of-maintaining(B) + G
ATM_approx(B | ~b) = price of failing to write(B)
The ATM of the fix is, in our analysis:
ATM_approx(B) = price of failing to write(B) * (1 - prob(b)) + (price of writing(B) + price-of-maintaining(B) + G) * prob(b)
This respects the tautological nature of our ATM theory: if the bug fix can be found 100% of the time, then ATM_approx(B) = ATM_approx(B | b).
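The same style of sketch applies to B, now with a term for the cost of a failed attempt (numbers invented again):

```python
def atm_of_risky_attempt(failing, writing, maintaining, gain, prob_success):
    """ATM_approx(B) = failing * (1 - prob(b))
                     + (writing + maintaining + gain) * prob(b)."""
    return (failing * (1.0 - prob_success)
            + (writing + maintaining + gain) * prob_success)

# Memory corruptions are nasty: say a 50% chance of finding the fix.
print(atm_of_risky_attempt(-3.0, -1.0, -2.0, 20.0, 0.5))  # 7.0 > 0: try the fix
```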
In order to process trauma, explain aesthetics, and understand various other brain phenomena, I think it is helpful to reify a basic brain operation that we could call Associative Jumps.
tl;dr Mental objects are linked in a web.
Albert was evaluating a particular audio effect (a tape machine emulation) when he was assaulted by overwhelming sad feelings that prevented him from continuing his evaluation.
Question: What could be a rational explanation for this?
Heavily intoxicated, Albert finds out that hearing pitched noise creates a heavy release of neurotransmitters. A frequency of 487 Hz seems to generate oxytocin, while 2.2 kHz generates something that looks like acetylcholine. Another one is all about serotonin. Albert is impressed, goes on, and concludes that chakras exist and that you can equalize them with sound! But this doesn't seem to work when Albert's attention is taken.
Question: What could be a rational explanation for this?
Actually, this can be explained partly by placebo, but also by an increase in the brain's associative engine due to the intoxication.
This all happens quickly enough to feel instant.
Question: How to validate this explanation?
Step into the chain and see if the associative chain still holds. In this case, you could do without the noise, just by thinking about the body part.
In a sober state, finding the associative jump can take more effort, especially since multiple jumps are valid destinations. Open Monitoring may help here, by revealing the most obvious contributors.
It seems to me that the most useful jumps are also the easiest to find. There is something I don't understand, though: Associative Jumps do not seem to be as repeatable as expected.
DISCLAIMER: I'm not a physiologist, and I do not work in medicine. Do not start new addictions. I'm just dabbling. The reality is probably more complex.
In the pursuit of diagonalization, I feel it can be helpful to identify body compounds by their imprint, in order to try to "invert" their effects on judgement.
Often the idea is to eat chemical precursors to increase the quantity, and thus the effect, of the compound.
A good way to develop a sense for the effects of sugar is biking, because this sport puts you closer to being an energy tank.
Using THC 1 hour after eating chickpeas will produce a high that has a serotonin color. You might even feel a buzz from chickpeas alone after one hour.
Fasting also increases serotonin.
Using THC 1 hour after eating red beans will produce a high that has a dopaminergic color. Beware: this is more anxiety-inducing than THC alone. Panic attacks are to be expected.
This seems to increase placebo effects, or to be their root cause.
The so-called runner's high is partly anandamide.
2 hours before running, drink a combination of paracetamol, powdered chocolate and grapeseed oil. It has an awful taste. The effect is an increase in "runner's high".
The other component of the runner's high (BDNF) seems to require fasting and slow running for a long time.
It is sensed the most the day after sport, especially if high intensity was involved.
The general quantity of inflammation can be lowered with Chinese cabbage (for example, by eating half of one).
Conversely, it can be increased with carbs such as bread or pasta, which feels much worse.
The effect of increased GABA can be felt after yoga. I will probably need to find the particular stretching practice that does it the most, to feel it better. It seems to increase sleep quality, much like going into the Alpha State.
I'd like to hear of ways to distinguish norepinephrine, glutamate, acetylcholine... It is also possible that I'm mistaking one chemical for another.
That is a very typical situation!
Entity E owns software artifact A. E wants a new feature which exists in library B, completely developed by others. However, there is also the possibility of writing its own feature instead of depending on B.
Let A' be the software artifact A after the change.
E wants to maximize its own ATM share of A' (maximize: ATM-E(A')).
Depending on the path followed, A' could be equal to:
- A' = A + B (dependency case)
- A' = A + C (write-your-own case)

Posed as an inequality, E should write its own if:

ATM-E(A + B) < ATM-E(A + C)

We know that A and B are independent:

ATM-E(A) + ATM-E(B) < ATM-E(A + C)
We know that E owns A and C completely. That gives ATM-E(A) = ATM(A) and ATM-E(C) = ATM(C):

ATM(A) + ATM-E(B) < ATM(A + C)

We know that A and C are not independent (it is expected that C will reuse parts of A), so we factor out their common part D:

A = D + (A without C)
C = D + (C without A)
Replacing:
ATM(D) + ATM(A without C) + ATM-E(B) < 2 x ATM(D) + ATM(A without C) + ATM(C without A)
Simplifying:
ATM-E(B) < ATM(D) + ATM(C without A)
Simplifying again:
ATM-E(B) < ATM(C)
So we've gone full circle into a tautology: E should write its own if the expected value of C over its lifetime exceeds the value of B, for E, over its lifetime.
We haven't said anything yet.
We postulate for this problem that ATM(x) will be split between:
- C(x), the creation ATM, which also includes learnings and injuries, not just the initial work
- P(x), the profit ATM
- M(x), the maintenance ATM

Of those:
- P(x) would typically be >= 0
- M(x) would typically be <= 0
- C(x) is much smaller in magnitude than M(x) and P(x)

Our equation becomes:
C-E(B) + M-E(B) + P-E(B) < C(C) + M(C) + P(C)
Because B is completely owned by others, C-E(B) is zero:

M-E(B) + P-E(B) < C(C) + M(C) + P(C)

Now, we postulate that the feature leads to equal sales whether it comes from artifact B or C. This is a debatable postulate:

P(C) = P-E(B)

which simplifies the inequality to:

M-E(B) < C(C) + M(C)
We stumble upon our second tautology: the cumulated maintenance ATM of C, plus its creation ATM (with its learnings and injuries), must be greater than the ATM cost of (partially) maintaining B.
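As a minimal numeric sketch of this final inequality (every dollar figure invented; costs are negative ATM):

```python
def should_write_your_own(maintaining_B_for_E, creating_C, maintaining_C):
    """The final inequality above: write your own iff M-E(B) < C(C) + M(C)."""
    return maintaining_B_for_E < creating_C + maintaining_C

# Hypothetical: tracking library B's breaking changes costs -2 over A's lifetime;
# writing C costs -5 (net of learnings) and maintaining it costs -4.
print(should_write_your_own(-2.0, -5.0, -4.0))  # False: depend on B instead
```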
I'd like to reason about any kind of software artifact:
In this blog these will be called "software artifacts".
A Software Artifact has:
Existence. It has a physical extent in lines of code, documentation, or data presence: Extent(x). Arguably, if it isn't there, then it doesn't exist.
Price. For each artifact x, we note ATM(x) the amount of Attention/Time/Money gained by x existing. This is the cost to write the artifact, the cost to maintain it, the gains from selling it, etc., aggregated over its total lifetime (like stocks). More often than not, this ATM isn't known.
Two software artifacts A and B are equal iff Extent(A) == Extent(B). They have equal ATM.
A software artifact is composed of other software artifacts, down to a unit level of minimum physical existence.
For example, a for loop is made of the `for` keyword, itself composed of the characters `f`, `o`, `r`.
While it is difficult to think of an ATM for a single `f` letter, we postulate that it exists.
Composition is noted A + B. It means the artifact A and the artifact B, in juxtaposition, possibly overlapping.
Two artifacts A and B are independent if, for every artifact a that composes A, a cannot be found composing B.
Example:
Company Comcom has two completely independent software products Toto and Titi,
that don't share a single file.
Postulate: if A and B are independent, then ATM(A + B) = ATM(A) + ATM(B).
Example:
The Toto product earns $1000 a month and the Titi product $2000 a month,
without any attention or time commitment, without sharing a single file.
Since they are independent, ATM(Titi + Toto) = ATM(Titi) + ATM(Toto).
Their earnings add up independently.
Note that we still don't know the actual value of ATM(Titi) or ATM(Toto).
If software artifacts A and B are not independent, there exist 3 independent artifacts a, b, C such that:

A == C + a
B == C + b

In this case: ATM(A + B) = 2.ATM(C) + ATM(a) + ATM(b)
Example:
The Toto and Titi products now share some library code Baba.
Toto earns $10000 a month and the Titi product only $100 a month,
without any attention or time commitment.
ATM(Titi + Toto) = 2.ATM(Baba) + ATM(Toto-specific code) + ATM(Titi-specific code)
Intuitively, we find that the most reused software artifacts have the most impact on the earnings of the company Comcom.
In this case, ATM(Toto-specific code) is probably large, but we don't know its exact value. We also don't know the split between Baba and the Toto-specific code.
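These postulates are mechanical enough to encode. A sketch, where an artifact is just a set of unit parts and every per-part ATM number is made up:

```python
# Each unit part carries a (made-up) ATM value; an artifact is a set of parts.
part_atm = {"baba": 50.0, "toto-specific": 9000.0, "titi-specific": 40.0}

Toto = {"baba", "toto-specific"}
Titi = {"baba", "titi-specific"}

def atm(artifact):
    """ATM of one artifact: the sum over its unit parts."""
    return sum(part_atm[p] for p in artifact)

def atm_composed(a, b):
    """ATM(A + B) per the postulate above: the shared part C counts twice,
    ATM(A + B) = 2*ATM(C) + ATM(a) + ATM(b)."""
    shared = a & b
    return 2 * atm(shared) + atm(a - shared) + atm(b - shared)

print(atm_composed(Toto, Titi))  # 2*50 + 9000 + 40 = 9140
```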
In typical practice, a proxy for knowing ATM(x) is the very measurable immediate sales generated by x.
This is wrong, because it is only an approximation: it is short-term and partial, and it ignores the future maintenance cost of x. A fact often lamented in "technical debt" rants.
While ATM is the primary measure of success of an artifact, we have only a passing understanding of its real value. What is used instead is often a short-term, partial proxy measurement for ATM: sales for the shareholder.
If you need to read only one article from this blog, read this one.
I was mildly interested in singing, and tried to learn to sing all by myself.
I reached a plateau pretty quickly. Whatever the exercises devised, and the quantity of singing performed, there would be no improvement anymore.
In the choruses I was part of, people said that being an autodidact doesn't get you very far with singing, and that your real learning only starts with your first personal singing lesson. I couldn't believe that.
But they were right. After 4 years, I gave in and finally became a student of Marie, a very skilled teacher. What I learned next was indeed much more solid and durable than my own mental model, and completely replaced it. It showed me that, at least for me, it was completely wrong to self-learn singing.
The content of these lessons was, as I found out, aimed at breaking this kind of skill ceiling.
So what did we do differently?
A common theme of the course was, largely, decoupling. Marie once said she was trying to augment "neural recombination". I would also go jogging, as I had noticed earlier that it helped learning.
So I got a decoupled glottal opening, decoupled air flow, a decoupled vocal tract, decoupled this-or-that resonance, and ended up singing the same way whatever the place (but not whatever the crowd).
It was as if those parameters were proper dimensions of singing, and I just had to map the former, correlated parameters onto those new, truer, decorrelated ones.
Often, once decoupled, a particular control could be forgotten again, like larynx positioning.
I could now change one parameter (e.g. air flow) without the others moving, and could thus sing a high note, for example.
It turns out this is a process similar to diagonalization.
In mathematics, diagonalization is a linear algebra process that finds a basis of truly independent vectors for a linear transformation.
Those true, clean directions are called eigenvectors.
Each eigenvector comes with a magnitude, and some matter more than others: a fact used in PCA, where the first position in the diagonal matrix is occupied by the most important vector (the most important factor of success, in a way).
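For a toy but concrete version of this process: generate two correlated "singing parameters", diagonalize their covariance matrix, and read off the decoupled axes and their relative importance. A sketch with numpy (the parameter names are just for flavor):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two correlated controls, e.g. air flow and glottal opening moving together.
air_flow = rng.normal(size=500)
glottal = 0.9 * air_flow + 0.1 * rng.normal(size=500)
data = np.column_stack([air_flow, glottal])

# Diagonalize the covariance matrix: eigenvectors are the decoupled axes.
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(data, rowvar=False))

order = np.argsort(eigenvalues)[::-1]  # sort by importance, as PCA does
print(eigenvalues[order])      # magnitudes: how much each decoupled axis matters
print(eigenvectors[:, order])  # the "true", decorrelated directions
```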
To me, Diagonalization is the most powerful tool in the toolbox.
I later found out that this type of diagonalization is useful in all other domains, when you want to reach expertise. Intuitively, it makes sense that the best performers are best at identifying what is important or not in their domain.
My belief is that to be an expert (or even to beat the experts) you need to re-learn your domain after first exposure, go find its true eigenvectors, and learn their relative importance.
Now that I re-read the above article, there were other teaching tricks involved in the singing lessons: the good use of the Zone of Proximal Development, the student-teacher relationship, the sense of community/imitation with other singers... which seems to imply that it's not all about that diagonalization concept I'm such a fan of.
DISCLAIMER: Do not read this article and do not Do It Yourself. This article is only my opinion.
Go find a licensed psychologist, who will find a respectful and manageable path towards processing traumas.
The techniques mentioned can be pretty confusing and harsh, and they generate permanent personality changes. That is even the entire goal. But reactivation of traumatic memories can be very harmful, and in some cases causes anxiety attacks and extreme levels of stress.
If you have to do that, do it from a position of comfort, when you are already feeling great.
After two episodes of DPDR, I forcefully became interested in lowering those stress levels for good. Despite its strangeness, DPDR is mechanical: it goes away if the stress goes down.
It turned out the main contributor to chronic high stress levels was unprocessed traumatic memories. A thesis also defended on that other website.
I'm convinced everyone has a backlog of unprocessed traumas and could benefit from thinking about it, when the conditions feel right.
Activating traumatic memories for examination creates a stress of its own, and a chronic one, so it's better to process traumas from a comfortable position. Or to process the ones that have already been reactivated in a chronic manner.
In other words: do things one by one, and leave ample time between such works.
And it's also better to only treat the most recently activated source of stress.
Go in reverse temporal order, and do not start by searching for traumas: start from the stress of daily life to find them.
Processing traumas gets easier with practice. Start with easy ones and go slow.
Example 1: Adam was driving and saw a dog about to cross the road. He slowed down. But the dog didn't move at all; it just stood there next to the road, as if about to cross. It turned out it was probably dead, and another car had hit it on the nose. Adam was disgusted, and this particular memory kept coming back after the trip.
Example 2: Bob watched a film named The Act of Killing. It is the most disturbing film he has ever watched. He regrets watching it, and had a nightmare with a traumatic color in which a protagonist of the film says, with a horrible voice: "Are you going to kill me?".
Sounds like small potatoes, yeah? Maybe, maybe not.
In almost every case, you will be astounded by the emotional charge associated with even small events like these. By association you will stumble upon the big stuff sooner or later, but do not rush it.
Be kind to yourself and try to find inconsequential things, perhaps even non-traumatic things, to think about.
This article proposes a method (akin to Neuro-Linguistic Programming) to self-treat lingering traumas by reconsolidating traumatic memories. The Wikipedia article is better than this one, so maybe jump there instead.
Do you really want to read a method from an armchair psychologist?
Using Open Monitoring, think about the distressing recent stress, or about a permanent stress you've felt.
You will get a starting point for your associative journey.
Example: Adam thinks about his feeling of dread at seeing the dead dog on the road.
Perform one (or more) of the most obvious Associative Jumps.
Be honest with yourself and find the truly most obvious jumps. You can end your associative search on several types of mental object:
So there are 3 possible scenarios here, only one of which is addressed in this article:
Case 2.1: The trauma is complex.
You end up on a "core belief" or a "value", unrelated to a single memory but a myriad of non-traumatic memories.
Exemple: "I was raised as if I had no value."
This is best treated with actual psychotherapy such as CBT, not by treating traumas.
Exit this tutorial.
Case 2.2: The trauma is singular.
You end up on a single memory from which most of the stress originates.
Example: "The day when Adam's dog died."
Wait before you think about that particular memory. Continue avoiding it until safety is in place.
Go to step 3.
Case 2.3: The trauma is inter-generational (advanced).
After a mental "rite of passage" (that you can refuse), you end up on unspecific abstract traumas with no emotional charge, no associated memories.
Exemple: "The fuzzy feeling of being beaten by someone larger."
Unfortunately I do not have enough experience with that to advise you. The problem is that one doesn't have any traumatic memory to work with.
Exit this tutorial.
Your goal is to discharge the emotional content of the singular memory and "re-learn" it.
Use the following two powerful safety procedures:
Use the time-reverse method: explore your memory from a few hours after the event back to its beginning, in reverse. The film plays backwards in the cinema.
Use the double dissociation method: you are watching someone in a cinema, who watches the film of your memory on a screen.
You should not be overwhelmed by emotions. Reliving the trauma as is (exposure) is NOT the goal.
What you can do then:
After you are done, the memory should be reintegrated/modified as a normal memory, without emotional charge. A release of tears and a sentiment of peace is a good sign. You can do this while staying calm; it need not be a panicky feeling.
One very disturbing thing at first is that there is no actual need for your new, tampered memory to be faithful to what really happened, since only the result matters. The truth matters less than your well-being. If the truth mattered so much, our memory wouldn't be rewritable. But traumatized people are often obsessed with exact, realistic history, and it can feel like "cheating".
With a large trauma, your life has been clearly split into two disjointed timelines, before and after the event. And you don't feel like the same person before and after.
It is possible that you meet a "trauma-holder", a personality fragment frozen at the age you were at the memory point. If you want to solve the trauma, you will accept the trauma-holder as he/she is, and somehow merge it with your current self. If you do this, you will have a single merged life timeline again.
In computer terms, a trauma would be a bit like a new git branch, and solving the trauma is akin to a git merge.
But the analogy stops here, because the forked branch has been left undeveloped and in disarray.
Lowered stress levels seem permanent. Solving most singular traumas shifts personality for a few weeks. Some advanced mental manipulations like these can provoke larger swings in personality (e.g. manic phase / depressive phase) that can oscillate for a few months.
If you don't want to be on your own, go consult a licensed psychologist.
A few people IRL know this about me, but in the past I've been technically "mad" with the trouble known as Depersonalization / Derealization Disorder (DPDR).
While many people experience symptoms of depersonalization/derealization during their life, I have a simple theory about why few people talk about it: it is difficult to explain, and too shameful/asocial to tell.
I consider myself cured from 2012 to 2014 thanks to DIY trauma processing, a great method for a whole lot of troubles.
It feels sane, and culturally possible in 2021, to be in a position to admit to those weaknesses.
In the software industry you cannot really speak about things like burnout or mental illness without harming your economic prospects. Thankfully, I have become economically independent from the technical Guernica known as the Software Industry, and can talk about real-life problems freely.
Those troubled periods had a disproportionate impact on my life. One of the intents of this blog is to get over them, while satisfying your voyeuristic edge.
Depersonalization/Derealization Disorder (DPDR) is a little-known dissociative disorder with a prevalence of 1 to 2%, often undiagnosed and under-researched. It is essentially a defense mechanism against stress that somehow gets stuck. It goes by several names: DPDR, DDD, DDS, DPD...
DPDR is strongly associated with childhood interpersonal trauma.
There seem to be 3 big types of meditation, two of which have been categorized as "Focused Attention" and "Open Monitoring" (Travis & Shear, 2010).
You can get a feel for those results:
You could meditate with focused attention and reach the Alpha State, which takes some training.
You could meditate with open monitoring and, without particular training, get troubling thoughts out of the way (which is super useful for stress).
In this article, with the example of hearing, I claim that there are likewise two ways of listening, completely parallel to those two meditation types.
In focused-attention listening, you concentrate on an individual feature, possibly losing the forest for the trees. Everything else that might attract attention is actively ignored, by constantly redirecting attention back to the same focus point.
Example: "Which sound has the best low-end?"
In Open Monitoring listening, you would try to think about nothing and examine the first thing that comes to your mind.
Example: "Which sound would you rather hear in your car?"
I claim that there are always those two ways of considering a perception (possibly: any mental object?).
Reifying Open Monitoring is super useful indeed: by being open to the ideas that come up, you can more easily perform Associative Jumps, or find a starting point for your associative search.
Open Monitoring is called "Deconcentration of Attention" in the Russian literature.
I'd like to introduce the unfortunate acronym ATM (Attention-Time-Money) as an aggregate measure of the economic value of software. For simplicity, these three resources can be considered the same quantity, convertible into one another.
Is ATM "pain"? Is it "price"? It's better to think of it as the price of a stock.
Much like a stock price, the value of ATM aggregates all future revenue/cost associated with a particular piece of software, down to the smallest unit.
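Hedging a lot (this formalization is mine alone, not a standard one), in the notation used elsewhere on this blog one could write:

ATM(x) = sum[for every moment t in the lifetime of x](revenue-t(x) - cost-t(x))

with every flow expressed in the fungible Attention/Time/Money unit, exactly like the future cash flows that value a stock.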
Successful software artifacts generate positive ATM instead of consuming it (negative ATM).
If a software artifact has negative ATM, it is software debt. If a software artifact has positive ATM, it is a software asset.
Assets and debts are the same thing: points on this flattened 1D axis of Attention, Time, and Money.
Another word for asset could be successful software, but I don't like it, since it pollutes the mental image with images of what successful people do. I prefer the term positive-ATM software.
A piece of advice on getting rich, given by Robert T. Kiyosaki, is to acquire assets and not acquire debt: a tautology that still deserved a book.
The translated advice on getting successful in software (positive ATM) is to acquire software assets and not acquire software debt.
This defines a criterion for ruling out methods: given alternatives A and B, the one that maximizes ATM should be systematically preferred. However, ATM is not easily measurable at all, and we don't have the equivalent of a Stock Market to price such a stock.
See: Software Artifacts as elementary units
Moreover, and more critically, ATM is the amount of current and future value the software brings to everyone, but this ATM is split, very often unevenly, amongst several entities. Whether a software artifact is a debt or an asset depends upon who is considered the owner.
See: Who owns software?
Once we have the concept of ATM, we can define the concept of software ownership more precisely.
The customer typically pays for software with Money, and derives some positive or negative value out of its existence.
Smart companies have their employees maintain "ownership" over codebases. The idea is that worker W knows more than anyone else about software artifact A. Repairing A will be less costly when W is doing it.
In economic terms, by virtue of having W know more about A, ATM(A) gets a higher value.
On the other hand, the company gets to be the "shareholder", and obviously pays for this software being created.
Who "owns" the software artifact A
? Is it W
or the company employing W
to work on A
? Is it C
the customer that bought it?
Our proposal is that ATM(A)
is typically split between:
W
,S
,C
The new definition of ATM is that it is the sum of the individual ATM contributions of every entity:
ATM(A) = sum[for every entity E](ATM(E, A))
Hence:
ATM(A) = ATM-W(A) + ATM-S(A) + ATM-C(A)
In our split with 3 entities, it is the sum of the ATM of the individual actors. Here is how to account for it:
- ATM-S (ATM for the Shareholder) would contain: the sales, minus the money paid to have the software created and maintained.
- ATM-W (ATM for the Worker) would contain: the salary and the knowledge gained, minus the attention and time spent.
- ATM-C (ATM for the Customer) would contain: the value derived from using the software, minus the price paid for it.
All these factors include current and future events.
Whether W, S, or C should be considered the "owner" of artifact A depends on the particular split.
Note that this is only one of the possible value splits, since S, W and C can be one and the same.
Ben is a contractor hired by TotoMetrics to make a small bug fix in their JSON parsing library. Ben charges $500 for the fix. He is done in 5 minutes and goes on with his day. He forgets everything about it. He never works for TotoMetrics again.
In ATM terms, let:
- L be the JSON parsing library with the bug
- Lf be the JSON parsing library after the fix
- S be the shareholder, TotoMetrics
- W be Ben

The company S hopes that:
ATM-S(Lf) > ATM-S(L) + $500
That is equivalent to:
ATM(Lf) - ATM-W(Lf) > ATM(L) - ATM-W(L) + $500
Now, for all intents and purposes, Ben has no real exposure to L: ATM-W(x) is 0. That simplifies the equation to:
ATM(Lf) > ATM(L) + $500
S in this case is the real "owner" of L, since its ATM fraction ATM-S is equal to ATM. S will benefit completely from L's losses and gains, and W won't be affected.
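Plugging invented numbers into this example (say the bug costs TotoMetrics $300/year of support for 3 more years, and the fix removes that entirely):

```python
# Hypothetical lifetime figures for the JSON library, in dollars of ATM.
ATM_L = 10_000.0            # the library, bug included
ATM_Lf = ATM_L + 3 * 300.0  # fixed library: $300/year of support saved, 3 years

fee = 500.0
# Ben has no exposure (ATM-W is 0), so the check reduces to:
print(ATM_Lf > ATM_L + fee)  # True: $900 of avoided support beats the $500 fee
```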
Steve has an idea and tells Walter about it: a new piece of software A. Walter likes the idea and proposes to build it for only $1000. When Steve fails to sell the early result, Walter steals the code, sabotages Steve's online store, and sells the software for his own benefit, with no additional development. The customers of the software merely recoup their investment: they get $10 of value for their $10 membership.
In ATM terms, let:
- A be the product being built
- S be the shareholder, Steve
- W be the worker, Walter
- C be the customers (but we know that ATM-C(A) is 0)
is 0)The hope of Steve is to get a positive ATM for himself:
ATM-S(A) > 0
However, since Steve will fail to sell the result forever, it is pretty clear that ATM-S is known: no upside is ever expected for Steve, and the salary was paid.
ATM-S(A) = -$1000
For all intents and purposes, Steve has no exposure to A other than that loss.
Walter doesn't know whether A will be an asset (positive ATM), but he knows he gets a $1000 head start on his own share of ATM:

ATM-W(A) = ATM(A) + $1000

As Walter knows that his own rate on the market is $500/day, he hopes to have a good estimate of ATM(A) within 2 days, so he knows whether to abandon the project or not.
Obviously, a measure of "ownership" would need to account for the possibility of negative ATM. I'm not sure whether a zero ATM means no ownership, or if the whole concept of "ownership" is moot.
In an ideal world, all software in existence would have positive ATM.
Contrarily, software with negative ATM is debt, and wouldn't exist in that ideal world. Consequently, software debt doesn't deserve to exist (in the same way that there are no stocks with negative prices).
If it were found that organized thoughts (and books) follow ATM economics similar to software's, then they also wouldn't deserve to exist.
The very act of forming and maintaining mental debt (instead of mental assets) would be a net loss for the individual, in the same way that negative-ATM software is a net loss for society in general.
In other words: not all books deserve to be written, and not all thoughts deserve to be formed. And there will be interest to pay on them too: bad books being read, or bad thoughts being maintained.
In the same way that ATM is distributed across everyone (even negatively), maintaining mental debt can be efficient for society as a whole, just not for the one who holds that debt.
Strangely my conclusion is at odds with the premise.
DISCLAIMER: I FORMALLY DECLINE RESPONSIBILITY FOR ANY HARM THAT COULD HAPPEN FOLLOWING THE READING OF THE FOLLOWING ARTICLE. Close this page now. This article isn't meant for anyone to read. You don't want to know. I'm only trying to remember.
This is a dangerous article to read. And it was annoying to write. I'm really just doing it to try to connect the dots. Reading about DPDR triggers a strange meditative state for me. I'm genuinely scared to write this shit article. Only read this if you are not an anxious person. There is an interesting article here that is perfectly adequate to explain the most interesting points.
DPDR is living without a personality, and without "storytelling". What might seem at first like a curious metaphysical experience is also a surprisingly painful condition.
It comes in various intensity levels. Light DPDR features only a few symptoms, but strong DPDR is a living hell.
You are the Observer. You are not living but watching someone live, who happens to be you.
You do not meet people, but lifeless actors who believe they are people. The fact that they believe they are people is of course ridiculous, because all humans are more like cardboard automatons.
The way they communicate with you makes it very clear they don't know they are cardboard automatons, and that they also believe in a fuckton of grand ideas that don't actually exist.
You do not walk in nature, but in a decor in the film of your life.
The Observer is a feeling that can be invoked independently of DPDR, once cured. It is quite hard to invoke in normal times. But in DPDR you are locked into this 3rd-person mode: the most objective you have ever been. :) Consequently, you are more likely to perform "out of character" actions that will surprise and delight your peers.
While the disease itself is caused by stress, one does not feel stressed anymore while suffering from DPDR. The cortisol is hidden by the dissociation. The stress feeling is still there, but underneath.
Basically your body is locked in fight-or-flight response.
In this situation, you are in need of coping mechanisms, such as watching someone else's story. Because the real source of the stress might be remote and other symptoms may complicate root cause analysis.
The sufferer is unable to feel emotions like joy, love, or sadness, and feels no attachment either. Seeing a loved one provokes no more emotion than encountering a lifeless robot.
Under DPDR, seeing old friends is like meeting strangers, because (as we'll see later on) the concept of your friendship has disintegrated. Like most other concepts.
I guess that in a situation of acute danger, it isn't exactly critical to feel in love or sad.
Severe DPDR comes with a particular mental restlessness that can hamper the ability to sleep, think, and relax. Thoughts just won't stop. I think this is the worst symptom of heavy DPDR.
While the DPDR lasts, it very much seems like life outside dissociation will never come back, and ended long ago. Something to be forgotten, since now you're seemingly permanently altered (and not in a good way!). DPDR feels like the new normal, something final even, however twisted that is. Perhaps it was always this way?
The possibility that things may be back to normal looks very remote.
Strangely enough, life under such dissociation doesn't feel less "real" than normalcy. It actually feels more real, since the "humanity layer" that was interpreting raw information is gone, and you are left working with an older, raw, analytic, bare-metal brain.
Life under DPDR is stripped of its human elements, and as such it feels obviously simpler and truer.
This is why I say personality is like being in a bubble: it filters information and creates meaning where there is sometimes little.
When things do go back to the normalcy you thought was impossible, you will have trouble remembering what really happened and in what order. It will seem very remote that you could have had no personality for such a long period of time. Your dissociated and non-dissociated times feel like disjointed segments of life.
It's hard to explain this one properly after so many years. I have difficulty remembering symptoms from 2007.
Essentially, the forces that maintain the continuity of meaning for things, concepts, words, and of course your personality (which is a mental object like any other...), stop their course.
You would look at a pear, see that it has a particular pear shape, a pear color, and that there is a word for it: "pear". But none of these things form a cohesive whole around the actual "pear" concept you might previously have had.
In normal times, concepts have a sustained existence because we continuously nourish them.
Like the fast movement of the eye giving the impression of a detailed, high-resolution visual world, the fast meaning-creation of the brain gives the impression of whole, well-separated concepts that help make sense of the world.
The person experiencing DPDR sees their personality structure dissolve, because every other thought concept is also dissolved. Concepts are broken down into their constitutive components, without the storyteller that keeps them united.
In a way, personality is a comforting bubble that sustains that meaning creation. It insulates us from tons of raw, meaningless events that do not sustain the personal narrative. The Ego has a good User Experience that simplifies life for us. As far as I'm concerned, it is definitely a Good Thing.
Music, films, and museums can make a lot more of an impression than usual. I would typically feel way more connection to music in this state. This is definitely a positive, as you get to find profoundness in pretty average things. :)
DPDR disappears in a pretty much mechanical way: less stress leading to less dissociation.
Emotions, feelings, and information filtering come back. It is of course an immense relief, since you get to experience happiness again. What happens next is that a new personality appears in about six months, not too far from the original.
This is also a very important and interesting time, when you can seemingly choose the features of your newfound personality bubble.
As a software developer (programmer) turned founder, I have seen no shortage of surprising insights and realizations since I started on the business path in 2015.
Making a software business does teach a few non-obvious things.
The best business advice for first-time founders is, in my opinion, the following:
"PICK AN EASY MARKET"
The corollary of which is:
"DO NOT PICK A DIFFICULT MARKET"
and also:
"DO NOT PICK A NON-EXISTING MARKET"
That's it.
Having to make heroic efforts is indicative of a market that doesn't make lots of money for lots of people. Say, video games, if you are the average independent game developer.
I believe picking a difficult market can be a consequence of bourgeois thinking.
By picking a difficult (read: prestigious) market, what happens is that:
All of which are really non-goals for a business.
As a reader of this blog, you probably skew richer and higher-educated. These are also the characteristics of early adopters (cf. the history of inventions).
In general, most founders, and the first users they encounter, will be of a higher class than the general population. This is bad.
Consequently the stereotypical start-up company will look like this:
That is inevitably far from realizing the growth potential, because you may inadvertently exclude the majority of the target market through bourgeois thinking.
When I was a lowly, snobbish intern at a very well known audio company, its founder told me very seriously that we were there to make software for everyone, not just snobs like me. Bourgeois thinking acts like a tax on society. This was one of the most on-point pieces of advice I ever got.
Go to the easiest market you can find, one that verifiably makes a lot of easy money for a non-trivial group of people. Make a product to get your foot in the door, and go bully the weakest competitor in the room.
That is the business advice you won't hear from anyone else, because everyone is shooting status-enhancing quotes from Steve Jobs.