Lucas, do you have evidence to back up your claim? Historically speaking, most JS engineers I know forget that the code in node_modules wasn't written by them, yet still count it as lines of code.
So it would be nice to see your proof; remember, you are defending your thesis here.
Here you go blog.acolyer.org/2017/09/19/to-typ...
cc @yaser
I haven't gone through the research paper, and I only skimmed the article real quick...
Could you please correct me if I'm wrong:
Out of 3 million bugs, they picked only 400 and studied them to reach that conclusion, right?
3M is the number of GitHub issues, and not all of them are bug fixes. The researchers had to manually recheck whether each one was actually a bug fix:

> Each is then manually assessed to determine whether or not it really is an attempt to fix a bug (as opposed to a feature enhancement, refactoring, etc.)

But they picked enough of them to make sure the result is statistically significant:

> To report results that generalize to the population of public bugs, we used the standard sample size computation to determine the number of bugs needed to achieve a specified confidence interval. On 19/08/2015, there were 3,910,969 closed bug reports in JavaScript projects on GitHub. We use this number to approximate the population. We set the confidence level and confidence interval to be 95% and 5%, respectively. The result shows that a sample of 384 bugs is sufficient for the experiment, which we rounded to 400 for convenience.
If you want more research, check this article: dev.to/baetheus/thank-you-next-typ...
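As an aside, that sample-size computation is easy to reproduce yourself. Here is a quick sketch (my own code, not from the paper) of Cochran's formula with a finite-population correction, which is the same computation the online calculators perform:

```typescript
// Cochran's sample-size formula with finite-population correction.
// z = 1.96 is the z-score for a 95% confidence level; margin = 0.05 is the
// confidence interval; p = 0.5 is the most conservative assumed proportion.
function sampleSize(population: number, z = 1.96, margin = 0.05, p = 0.5): number {
  const n0 = (z * z * p * (1 - p)) / (margin * margin); // infinite-population size
  return Math.ceil(n0 / (1 + (n0 - 1) / population));   // correct for the finite population
}

console.log(sampleSize(3_910_969)); // 385 — essentially the paper's 384, rounded up to 400
```

Note how weakly the result depends on the population: 3.9 million closed bug reports still only call for a few hundred samples at this confidence level.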
@stereobooster I wasn't asking for support that type systems find bugs, I was asking that the author of the article defend any of his points. :p
However, if I were to extrapolate from that article, it would be that two years ago, when TypeScript was still new and lacked a large following (not many public modules were properly typed back then), it still had a significantly positive impact. If they ran that study again today, I would imagine the results would be significantly higher, just from the new features in TypeScript, let alone the number of fully typed libraries that now exist in DefinitelyTyped.
Sorry for my late reply, and thanks a lot @stereobooster for the explanation 🙂
Besides that, the researcher is a guy from Microsoft (which could make the research a bit biased). I'm not a math expert, but the confidence interval is affected by both the variation and the sample size.
And I find 400 bugs to be a ridiculously small sample size, when you know that bugs can vary across an infinite range.
Of course static typing would discover some bugs, but everything comes at a price.
The Price (a blind spot that the research is not looking at):
Supposing the numbers were very accurate, is the extra effort (using static typing) worth it (the discovery of 10% of bugs)?
That 10% of bugs: are they really hard-to-discover bugs, or just ones where you would open the browser and find them right away?
Generally, the bugs that devs find valuable to discover are the logical ones, not the syntax or type-level ones.
TypeScript would require you to write more code.
And hey, LOC (lines of code) is a liability, NOT an asset... meaning that extra code needs to be maintained.
I just started a discussion from a different angle here (feel free to join):
Will Typescript Make Your Software Bug Free?
Yaser Al-Najjar ・ Aug 13 '19 ・ 1 min read
> Besides that the researcher is a guy from Microsoft (which could make the research a bit biased)

Yaser, firstly, there are three co-authors on the 'To type or not to type' paper: two of them are listed at University College London and one at Microsoft Research. Now, you could argue that the Microsoft Research guy is pushing TypeScript because some other team in Microsoft makes it. Far-fetched, but sure. So then why would the paper say that both TypeScript and Flow were about equally effective? Flow is made by Facebook. Wouldn't that go against your bias argument?

> But the confidence interval is affected by the variation & the sample size. ... And, I find 400 bugs to be a ridiculously small sample size

Well, that's how statistical analysis works. You don't need to trust this paper; calculating a sample size for a statistically significant result is a well-known technique. Go to surveymonkey.com/mp/sample-size-ca... and plug in the numbers (population size 3 million, confidence level 95%, error margin 5%) and you will get the same sample size, 385.

> The Price (blind spot that the research is not looking at):

Nope, they looked at it.

> Is the extra effort (using static typing) worth it (the discovery of 10% bugs)?

The extra effort was timeboxed deliberately: they decided to look only at bugs that could be fixed by applying very simple types within a 10-minute window.

> are they really hard to discover bugs? or just ones that you would open the browser and you would find them right away?

These were bugs that were shipped to and reported in production, so they had already passed all the quality-control methods the projects had in place.

> Typescript would require you to write more code. ... And hey, LOC (lines of code) is a liability, NOT an asset... means that extra code need to be maintained.

Unit tests also require you to write more code. That's a liability, don't write unit tests! ;-)
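For what it's worth, here is a made-up illustration (my own snippet, not one of the study's actual bugs) of the kind of trivially simple annotation those 10-minute fixes amounted to: a small interface that turns a silent runtime failure into a compile-time error.

```typescript
// Hypothetical example: a property-name typo that plain JavaScript only
// reveals at runtime (the result silently becomes NaN in production).
interface Order {
  price: number;
  quantity: number;
}

function total(order: Order): number {
  // With the Order annotation, the compiler rejects the typo'd version:
  //   return order.prise * order.quantity;
  //   error TS2551: Property 'prise' does not exist on type 'Order'.
  return order.price * order.quantity;
}

console.log(total({ price: 10, quantity: 3 })); // 30
```

Without the annotation, `order.prise` evaluates to `undefined`, the multiplication yields `NaN`, and the bug ships; with it, the code never compiles in the first place.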