CSS Interview Prep Quiz
Plus CSS Cheat Sheet (82 questions total)
#### Q1. In the following example, which selector has the highest specificity ranking for selecting the anchor link element?
```css
ul li a
a
.example a
div a
```

- [x] `.example a`
- [ ] `div a`
- [ ] `a`
- [ ] `ul li a`
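Specificity explains the answer: browsers compare (IDs, classes, type selectors) counts, and a single class outweighs any number of type selectors. A quick sketch (the color values are illustrative):

```css
a          { color: red; }    /* specificity 0,0,1 — one type selector    */
div a      { color: green; }  /* specificity 0,0,2 — two type selectors   */
ul li a    { color: blue; }   /* specificity 0,0,3 — three type selectors */
.example a { color: purple; } /* specificity 0,1,1 — the class wins       */
```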
Q2. Using an attribute selector, how would you select an `<a>` element with a "title" attribute?

- [x] `a[title] {…}`
- [ ] `a > title {…}`
- [ ] `a.title {…}`
- [ ] `a=title {…}`
Q3. CSS grid and flexbox are now becoming a more popular way to create page layouts. However, floats are still commonly used, especially when working with an older code base, or if you need to support older browser versions. What are two valid techniques used to clear floats?
- [ ] Use the “clearfix hack” on the floated element and add a float to the parent element.
- [ ] Use the overflow property on the floated element or the “clearfix hack” on either the floated or parent element.
- [ ] Use the “clearfix hack” on the floated element or the overflow property on the parent element.
- [x] Use the “clearfix hack” on the parent element or use the overflow property with a value other than “visible.”
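As a sketch of the two techniques, assuming a hypothetical `.container` parent with floated children:

```css
/* Technique 1: the "clearfix hack" applied to the parent */
.container::after {
  content: "";
  display: table;
  clear: both;
}

/* Technique 2: the overflow property on the parent,
   with any value other than "visible" */
.container {
  overflow: auto;
}
```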
Q4. What element(s) do the following selectors match to?
1. `.nav {...}`
2. `nav {...}`
3. `#nav {...}`

- [ ]
  1. An element with an ID of "nav"
  2. A nav element
  3. An element with a class of "nav"
- [ ] They all target the same nav element.
- [x]
  1. An element with a class of "nav"
  2. A nav element
  3. An element with an ID of "nav"
- [ ]
  1. An element with a class of "nav"
  2. A nav element
  3. A div with an ID of "nav"
Q5. When adding transparency styles, what is the difference between using the `opacity` property versus the `background` property with an `rgba()` value?

- [ ] Opacity specifies the level of transparency of the child elements. Background with an `rgba()` value applies transparency to the background color only.
- [ ] Opacity applies transparency to the background color only. Background with an `rgba()` value specifies the level of transparency of an element, as a whole, including its content.
- [x] Opacity specifies the level of transparency of an element, including its content. Background with an `rgba()` value applies transparency to the background color only.
- [ ] Opacity applies transparency to the parent and child elements. Background with an `rgba()` value specifies the level of transparency of the parent element only.
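A minimal illustration of the difference (class names are illustrative):

```css
/* Fades the whole element — background, borders, text, and children */
.faded-box {
  background: blue;
  opacity: 0.5;
}

/* Fades only the background color; the text stays fully opaque */
.faded-background {
  background: rgba(0, 0, 255, 0.5);
}
```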
Q6. What is true of block and inline elements? (Alternative: Which statement about block and inline elements is true?)
- [ ] By default, block elements are the same height and width as the content contained between their tags; inline elements span the entire width of their container.
- [x] By default, block elements span the entire width of their container; inline elements are the same height and width as the content contained between their tags.
- [ ] A ___ element is an example of an inline element; ___ is an example of a block element.
- [ ] A `<span>` is an example of a block element; a ___ is an example of an inline element.
Q7. CSS grid introduced a new length unit, fr, to create flexible grid tracks. Referring to the code sample below, what will the widths of the three columns be?
```css
.grid {
  display: grid;
  width: 500px;
  grid-template-columns: 50px 1fr 2fr;
}
```
- [ ] The first column will have a width of 50px. The second column will be 50px wide and the third column will be 100px wide.
- [x] The first column will have a width of 50px. The second column will be 150px wide and the third column will be 300px wide.
- [ ] The first column will have a width of 50px. The second column will be 300px wide and the third column will be 150px wide.
- [ ] The first column will have a width of 50px. The second column will be 500px wide and the third column will be 1000px wide.
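The arithmetic behind the answer, annotated in the same rule:

```css
.grid {
  display: grid;
  width: 500px;
  /* 500px − 50px fixed track = 450px left to distribute.
     1fr + 2fr = 3 shares, so 1fr = 450px ÷ 3 = 150px.
     Columns: 50px, 150px (1fr), 300px (2fr). */
  grid-template-columns: 50px 1fr 2fr;
}
```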
Q8. What is the line-height property primarily used for?
- [x] to control the height of the space between two lines of content
- [ ] to control the height of the space between heading elements
- [ ] to control the height of the character size
- [ ] to control the width of the space between characters
Q9. Three of these choices are true about class selectors. Which is NOT true?
- [ ] Multiple classes can be used within the same element.
- [ ] The same class can be used multiple times per page.
- [ ] Class selectors are marked with a leading period.
- [x] Classes can be used multiple times per page but not within the same element.
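For example, both of these (illustrative) classes can be reused across the page and combined on one element such as `<p class="intro highlight">`:

```css
.intro {
  font-style: italic;
}
.highlight {
  background: yellow; /* applies to every element carrying this class */
}
```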
Q10. There are many properties that can be used to align elements and create page layouts such as float, position, flexbox and grid. Of these four properties, which one should be used to align a global navigation bar which stays fixed at the top of the page?
- [x] position
- [ ] flexbox
- [ ] grid
- [ ] float
Q11. In the shorthand example below, which individual background properties are represented?
```css
background: blue url(image.jpg) no-repeat scroll 0px 0px;
```

- [x]
  ```css
  background-color: blue;
  background-image: url(image.jpg);
  background-repeat: no-repeat;
  background-attachment: scroll;
  background-position: 0px 0px;
  ```
- [ ]
  ```css
  background-color: blue;
  background-img: url(image.jpg);
  background-position: no-repeat;
  background-scroll: scroll;
  background-size: 0px 0px;
  ```
- [ ]
  ```css
  background-color: blue;
  background-src: url(image.jpg);
  background-repeat: no-repeat;
  background-wrap: scroll;
  background-position: 0px 0px;
  ```
- [ ]
  ```css
  background-color: blue;
  background-src: url(image.jpg);
  background-repeat: no-repeat;
  background-scroll: scroll;
  background-position: 0px 0px;
  ```
Q12. In the following example, according to cascading and specificity rules, what color will the link be?
```css
.example {
  color: yellow;
}
ul li a {
  color: blue;
}
ul a {
  color: green;
}
a {
  color: red;
}
```

```html
<ul>
  <li><a href="#">link</a></li>
  <li>list item</li>
  <li>list item</li>
</ul>
```
- [ ] green
- [x] yellow
- [ ] blue
- [ ] red
Q13. When elements overlap, they are ordered on the z-axis (i.e., which element covers another). The z-index property can be used to specify the z-order of overlapping elements. Which set of statements about the z-index property are true?
- [x] Larger z-index values appear on top of elements with a lower z-index value. Negative and positive numbers can be used. z-index can only be used on positioned elements.
- [ ] Smaller z-index values appear on top of elements with a larger z-index value. Negative and positive numbers can be used. z-index must also be used with positioned elements.
- [ ] Larger z-index values appear on top of elements with a lower z-index value. Only positive numbers can be used. z-index must also be used with positioned elements.
- [ ] Smaller z-index values appear on top of elements with a larger z-index value. Negative and positive numbers can be used. z-index can be used with or without positioned elements.
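A minimal sketch of those rules in practice (class names are illustrative):

```css
.behind {
  position: relative; /* z-index only applies to positioned elements */
  z-index: -1;        /* negative values are allowed */
}
.in-front {
  position: relative;
  z-index: 10;        /* the larger value paints on top */
}
```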
Q14. What is the difference between the following line-height settings?
```css
line-height: 20px;
line-height: 2;
```
- [x] The value of 20px will set the line-height to 20px. The value of 2 will set the line-height to twice the size of the corresponding font-size value.
- [ ] The value of 20px will set the line-height to 20px. The value of 2 is not valid.
- [ ] The value of 20px will set the line-height to 20px. The value of 2 will default to a value of 2px.
- [ ] The value of 20px will set the line-height to 20px. The value of 2 will set the line-height to 20% of the corresponding font-size value.
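Side by side, assuming a 16px font-size (class names are illustrative):

```css
.fixed {
  font-size: 16px;
  line-height: 20px; /* always 20px, even if font-size changes */
}
.scaled {
  font-size: 16px;
  line-height: 2;    /* 2 × 16px = 32px, and rescales with font-size */
}
```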
Q15. In the following example, what color will paragraph one and paragraph two be? (Alternative: In this example, what color will paragraphs one and two be?)
```html
<section>
  <p>paragraph one</p>
</section>
<p>paragraph two</p>
```

```css
section p {
  color: red;
}
section + p {
  color: blue;
}
```

- [ ] Paragraph one will be blue, paragraph two will be red.
- [ ] Both paragraphs will be blue.
- [x] Paragraph one will be red, paragraph two will be blue.
- [ ] Both paragraphs will be red.
Q16. What are three valid ways of adding CSS to an HTML page?

- [ ]
  1. External; CSS is written in a separate file.
  2. Inline; CSS is added to the `<head>` of the HTML page.
  3. Internal; CSS is included within the HTML tags.
- [ ]
  1. External; CSS is written in a separate file and is linked within the `<head>` element of the HTML file.
  2. Inline; CSS is added to the HTML tag.
  3. Internal; CSS is included within the `<head>` element of the HTML file.
- [ ]
  1. External; CSS is written in a separate file and is linked within the `<head>` element of the HTML file.
  2. Internal; CSS is included within the `<head>` element of the HTML file.
  3. Inline; CSS is added to the HTML tag.
- [x]
  1. External; CSS is written in a separate file and is linked within the `<head>` element of the HTML file.
  2. Inline; CSS is added to the HTML tag.
  3. Internal; CSS is included within the `<head>` element of the HTML file.
Q17. Which of the following is true of the SVG image format? (Alternative: Which statement about the SVG image format is true?)
- [ ] CSS can be applied to SVGs but JavaScript cannot be.
- [ ] SVGs work best for creating 3D graphics.
- [x] SVGs can be created as a vector graphic or coded using SVG specific elements such as <svg>, <line>, and <ellipse>.
- [ ] SVGs are a HAML-based markup language for creating vector graphics.
Q18. In the example below, when will the color pink be applied to the anchor element?
```css
a:active {
  color: pink;
}
```

- [ ] The color of the link will display as pink after it has been clicked or if the mouse is hovering over the link.
- [ ] The color of the link will display as pink on mouse hover.
- [x] The color of the link will display as pink while the link is being clicked but before the mouse click is released.
- [ ] The color of the link will display as pink before it has been clicked.
Q19. To change the color of an SVG using CSS, which property is used?
- [ ] Use background-fill to set the color inside the object and stroke or border to set the color of the border.
- [ ] The color cannot be changed with CSS.
- [ ] Use fill or background to set the color inside the object and stroke to set the color of the border.
- [x] Use fill to set the color inside the object and stroke to set the color of the border.
Q20. When using position: fixed, what will the element always be positioned relative to?
- [ ] the closest element with position: relative
- [x] the viewport
- [ ] the parent element
- [ ] the wrapper element
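For instance, a hypothetical navigation bar pinned to the viewport stays put while the page scrolls:

```css
.global-nav {
  position: fixed; /* positioned relative to the viewport */
  top: 0;
  left: 0;
  width: 100%;
}
```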
Q21. By default, a background image will repeat \_\_\_
- [ ] only if the background-repeat property is set to repeat
- [x] indefinitely, vertically, and horizontally
- [ ] indefinitely on the horizontal axis only
- [ ] once, on the x and y axis
Q22. When using media queries, media types are used to target a device category. Which choice lists current valid media types?
- [ ] print, screen, aural
- [ ] print, screen, television
- [x] print, screen, speech
- [ ] print, speech, device
Q23. How would you make the first letter of every paragraph on the page red?
- [x] p::first-letter { color: red; }
- [ ] p:first-letter { color: red; }
- [ ] first-letter::p { color: red; }
- [ ] first-letter:p { color: red; }
Q24. In this example, what is the selector, property, and value?
```css
p {
  color: #000000;
}
```

- [ ] "p" is the selector; "#000000" is the property; "color" is the value
- [x] "p" is the selector; "color" is the property; "#000000" is the value
- [ ] "color" is the selector; "#000000" is the property; "#p" is the value
- [ ] "color" is the selector; "p" is the property; "#000000" is the value
Q25. What is the rem unit based on?
- [ ] The rem unit is relative to the font-size of the p element.
- [ ] You have to set the value for the rem unit by writing a declaration such as `rem { font-size: 15px; }`
- [ ] The rem unit is relative to the font-size of the containing (parent) element.
- [x] The rem unit is relative to the font-size of the root element of the page.
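For example, with the root font-size set explicitly:

```css
html {
  font-size: 16px;   /* the root value that rem resolves against */
}
p {
  font-size: 1.5rem; /* 1.5 × 16px = 24px, regardless of the parent */
}
```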
Q26. Which of these would give a block element rounded corners?
- [ ] corner-curve: 10px
- [ ] border-corner: 10px
- [x] border-radius: 10px
- [ ] corner-radius: 10px
Q27. In the following media query example, what conditions are being targeted?
```css
@media (min-width: 1024px), screen and (orientation: landscape) { … }
```
- [x] The rule will apply to a device that has either a width of 1024px or wider, or is a screen device in landscape mode.
- [ ] The rule will apply to a device that has a width of 1024px or narrower and is a screen device in landscape mode.
- [ ] The rule will apply to a device that has a width of 1024px or wider and is a screen device in landscape mode.
- [ ] The rule will apply to a device that has a width of 1024px or narrower, or is a screen device in landscape mode.
Q28. CSS transform properties are used to change the shape and position of the selected objects. The transform-origin property specifies the location of the element’s transformation origin. By default, what is the location of the origin?
- [x] the top left corner of the element
- [ ] the center of the element
- [ ] the top right corner of the element
- [ ] the bottom left of the element
Q29. Which of the following is not a valid color value?
- [ ] color: #000
- [ ] color: rgb(0,0,0)
- [ ] color: #000000
- [x] color: 000000
Q30. What is the vertical gap between the two elements below?
Div 1
Div 2
- [x] 2rem
- [ ] 32px
- [ ] 64px
- [ ] 4rem
Q31. When using the Flexbox method, what property and value is used to display flex items in a column?
- [x] flex-flow: column; or flex-direction: column
- [ ] flex-flow: column;
- [ ] flex-column: auto;
- [ ] flex-direction: column;
Q32. Which type of declaration will take precedence?
- [ ] any declarations in user-agent stylesheets
- [x] important declarations in user stylesheets
- [ ] normal declarations in author stylesheets
- [ ] important declarations in author stylesheets
Q33. The flex-direction property is used to specify the direction that flex items are displayed. What are the values used to specify the direction of the items in the following examples?
- [x]
  Example 1: `flex-direction: row;`
  Example 2: `flex-direction: row-reverse;`
  Example 3: `flex-direction: column;`
  Example 4: `flex-direction: column-reverse;`
- [ ]
  Example 1: `flex-direction: row-reverse;`
  Example 2: `flex-direction: row;`
  Example 3: `flex-direction: column-reverse;`
  Example 4: `flex-direction: column;`
- [ ]
  Example 1: `flex-direction: row;`
  Example 2: `flex-direction: row-reverse;`
  Example 3: `flex-direction: column;`
  Example 4: `flex-direction: reverse-column;`
- [ ]
  Example 1: `flex-direction: column;`
  Example 2: `flex-direction: column-reverse;`
  Example 3: `flex-direction: row;`
  Example 4: `flex-direction: row-reverse;`
Q34. There are two sibling combinators that can be used to select elements contained within the same parent element; the general sibling combinator (~) and the adjacent sibling combinator (+). Referring to example below, which elements will the styles be applied to?
```css
h2 ~ p {
  color: blue;
}
h2 + p {
  background: beige;
}
```

```html
<p>paragraph 1</p>
<h2>Heading</h2>
<p>paragraph 2</p>
<p>paragraph 3</p>
```

- [ ] Paragraphs 2 and 3 will be blue. The h2 and paragraph 2 will have a beige background.
- [x] Paragraphs 2 and 3 will be blue. Paragraph 2 will have a beige background.
- [ ] Paragraph 2 will be blue. Paragraphs 2 and 3 will have a beige background.
Q35. When using flexbox, the “justify-content” property can be used to distribute the space between the flex items along the main axis. Which value should be used to evenly distribute the flex items within the container shown below?
- [x] justify-content: space-around;
- [ ] justify-content: center;
- [ ] justify-content: auto;
- [ ] justify-content: space-between;
Q36. There are many advantages to using icon fonts. What is one of those advantages?
- [ ] Icon fonts increase accessibility.
- [ ] Icon fonts can be used to replace custom fonts.
- [x] Icon fonts can be styled with typography related properties such as font-size and color.
- [ ] Icon fonts are also web safe fonts.
Q37. What is the difference between `display: none` and `visibility: hidden`?
- [ ] Both will hide the element on the page, but display:none has greater browser support. visibility:hidden is a new property and does not have the best browser support
- [ ] display:none hides the elements but maintains the space it previously occupied. visibility:hidden will hide the element from view and remove it from the normal flow of the document
- [x] display:none hides the element from view and removes it from the normal flow of the document. visibility:hidden will hide the element but maintains the space it previously occupied.
- [ ] There is no difference; both will hide the element on the page
Q38. What selector and property would you use to scale an element to be 50% smaller on hover?
- [ ] element:hover {scale: 0.5;}
- [x] element:hover {transform: scale(0.5);}
- [ ] element:hover {scale: 50%;}
- [ ] element:hover {transform: scale(50%);}
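The winning selector in context, with an optional transition added for smoothness (the `.element` class is from the question's options):

```css
.element {
  transition: transform 0.3s ease; /* optional: animates the scaling */
}
.element:hover {
  transform: scale(0.5); /* half the original size */
}
```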
Q39. Which statement regarding icon fonts is true?
- [ ] Icon fonts can be inserted only using JavaScript.
- [ ] Icon fonts are inserted as inline images.
- [ ] Icon fonts require browser extensions.
- [x] Icon fonts can be styled with typography-related properties such as font-size and color.
Q40. The values for the font-weight property can be keywords or numbers. For each numbered value below, what is the associated keyword?
```css
font-weight: 400;
font-weight: 700;
```
- [ ] bold; normal
- [x] normal; bold
- [ ] light; normal
- [ ] normal; bolder
Q41. If the width of the container is 500 pixels, what would the width of the three columns be in this layout?
```css
.grid {
  display: grid;
  grid-template-columns: 50px 1fr 2fr;
}
```
- [x] 50px, 150px, 300px
- [ ] 50px, 200px, 300px
- [ ] 50px, 100px, 200px
- [ ] 50px, 50px, 100px
Q42. Using the :nth-child pseudo class, what would be the most efficient way to style every third item in a list, no matter how many items are present, starting with item 2?
- [ ]
  ```css
  li:nth-child(3 + 2n) {
    margin: 0 5px;
  }
  ```
- [x]
  ```css
  li:nth-child(3n + 2) {
    margin: 0 5px;
  }
  ```
- [ ]
  ```css
  li:nth-child(2),
  li:nth-child(5),
  li:nth-child(8) {
    margin: 0 5px;
  }
  ```
- [ ]
  ```css
  li:nth-child(2n + 3) {
    margin: 0 5px;
  }
  ```
Q43. Which selector would select only internal links within the current page?
- [ ] `a[href="#"] {...}`
- [ ] `a[href~="#"]`
- [x] `a[href^="#"]`
- [ ] `a[href="#"]`
Q44. What is not true about class selectors?
- [x] Only one class value can be assigned to an element.
- [ ] An element can have multiple class values.
- [ ] Class selectors are marked with a leading period.
- [ ] More than one element can have the same class value.
Q45. What is the difference between the margin and padding properties?
- [ ] Margin adds space around and inside of an element; padding adds space only inside of an element.
- [x] Margin adds space around an element; padding adds space inside of an element.
- [ ] Margin adds a line around an element, padding adds space inside of an element.
- [ ] Margin adds space inside of an element, padding adds space around an element.
Q46. What is not a valid way of declaring a padding value of 10 pixels on the top and bottom, and 0 pixels on the left and right?
- [x] padding: 10px 10px 0px 0px;
- [ ] padding: 10px 0px;
- [ ] padding: 10px 0;
- [ ] padding: 10px 0px 10px 0px;
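Comparing the three valid shorthands with the invalid one (class names are illustrative):

```css
.a { padding: 10px 0; }            /* two values: vertical | horizontal */
.b { padding: 10px 0px; }          /* same, with explicit units */
.c { padding: 10px 0px 10px 0px; } /* four values: top right bottom left */

/* Not equivalent: top 10px, right 10px, bottom 0, left 0 */
.d { padding: 10px 10px 0px 0px; }
```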
Q47. Is there an error in this code? If so, find the best description of the problem.

```css
@font-face {
  font-family: 'Avenir', sans-serif;
  src: url('avenir.woff2') format('woff2'), url('avenir.woff') format('woff');
}
```
- [ ] The font file formats are not supported in modern browsers.
- [ ] The src attribute requires a comma between the URL and format values.
- [ ] There are no errors in the example.
- [x] The sans-serif inclusion is problematic.
Q48. Which style places an element at a fixed location within its container?
- [x] position: absolute;
- [ ] display: flex;
- [ ] display: block;
- [ ] float: left;
Q49. The calc() CSS function is often used for calculating relative values. In the example below, what is the specified margin-left value?
```css
.example {
  margin-left: calc(5% + 5px);
}
```

- [x] The left margin value is equal to 5% of its parent element's width plus 5px
- [ ] The left margin value is equal to 5% of the viewport width plus 5px
- [ ] The left margin value is equal to 5% of the closest positioned element’s width plus 5px
- [ ] The left margin value is equal to 5% of the selected element’s width (.example) plus 5px
Q50. What is the CSS selector for an `<a>` tag containing the title attribute?

- [x] `a[title]`
- [ ] `a > title`
- [ ] `a=title`
- [ ] `a.title`
Q51. Which code would you use to absolutely position an element of the logo class?
- [x] `.logo { position: absolute; left: 100px; top: 150px; }`
- [ ] `.logo { position: absolute; margin-left: 100px; margin-top: 150px; }`
- [ ] `.logo { position: absolute; padding-left: 100px; padding-top: 150px; }`
- [ ] `.logo { position: absolute; left-padding: 100px; top-padding: 150px; }`
Q52. In this example, what color will Paragraph 1 be?
```css
p:first-of-type {
  color: red;
}
p {
  color: blue;
}
.container {
  color: yellow;
}
p:first-child {
  color: green;
}
```

```html
<h1>Heading</h1>
<p>Paragraph1</p>
<p>Paragraph2</p>
```
- [ ] blue
- [ ] green
- [x] red
- [ ] yellow
Q53. What is the `::placeholder` pseudo-element used for?
- [x] It is used to format the appearance of placeholder text within a form control.
- [ ] It specifies the default input text for a form control.
- [ ] It writes text content into a hyperlink tooltip.
- [ ] It writes text content into any page element.
Q54. Which statement is true of the single colon (`:`) or double colon (`::`) notations for pseudo-elements, for example, `::before` and `:before`?

- [ ] All browsers support single and double colons for new and older pseudo-elements, so you can use either, but it is convention to use single colons for consistency.
- [ ] In CSS3, the double colon notation (`::`) was introduced to create a consistency between pseudo-elements and pseudo-classes. For newer browsers, use the double colon notation; for IE8 and below, use the single colon notation (`:`).
- [ ] Only the new CSS3 pseudo-elements require the double colon notation, while the CSS2 pseudo-elements do not.
- [x] In CSS3, the double colon notation (`::`) was introduced to differentiate pseudo-elements from pseudo-classes. However, modern browsers support both formats; older browsers such as IE8 and below do not.
Q55. Which choice is not a valid value for the font-style property?
- [ ] normal
- [ ] italic
- [x] none
- [ ] oblique
Q56. When would you use the @font-face method?
- [ ] to set the font size of the text
- [x] to load custom fonts into a stylesheet
- [ ] to change the name of the font declared in the font-family
- [ ] to set the color of the text
Q57. When elements within a container overlap, the z-index property can be used to indicate how those items are stacked on top of each other. Which set of statements is true?
- [x]
  1. Larger z-index values appear on top of elements with a lower z-index value.
  2. Negative and positive numbers can be used.
  3. z-index can be used only on positioned elements.
- [ ]
  1. Smaller z-index values appear on top of elements with a larger z-index value.
  2. Negative and positive numbers can be used.
  3. z-index can be used with or without positioned elements.
- [ ]
  1. Smaller z-index values appear on top of elements with a larger z-index value.
  2. Negative and positive numbers can be used.
  3. z-index must also be used with positioned elements.
- [ ]
  1. Larger z-index values appear on top of elements with a lower z-index value.
  2. Only positive numbers can be used.
  3. z-index must also be used with positioned elements.
Q58. You have a large image that needs to fit into a 400 x 200 pixel area. What should you resize the image to if your users are using Retina displays?
- [ ] 2000 x 1400 pixels
- [ ] 200 x 100 pixels
- [x] 800 x 400 pixels
- [ ] 400 x 200 pixels
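One way to apply this, sketched with a hypothetical file name: export the image at twice the display dimensions and let CSS scale it down:

```css
.hero {
  width: 400px;
  height: 200px;
  background-image: url('photo-800x400.jpg'); /* 2x asset for Retina */
  background-size: 400px 200px;               /* displayed at CSS-pixel size */
}
```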
Q59. In Chrome’s Developer Tools view, where are the default styles listed?
- [x] under the User Agent Stylesheet section on the right
- [ ] in the third panel under the Layout tab
- [ ] under the HTML view on the left
- [ ] in the middle panel
Q60. While HTML controls document structure, CSS controls _.
- [ ] semantic meaning
- [ ] content meaning
- [ ] document structure
- [x] content appearance
Q61. What is the recommended name you should give the folder that holds your project’s images?
- [x] images
- [ ] #images
- [ ] Images
- [ ] my images
Q62. What is an advantage of using inline CSS?
- [ ] It is easier to manage.
- [x] It is easier to add multiple styles through it.
- [ ] It can be used to quickly test local CSS overrides.
- [ ] It reduces conflict with other CSS definition methods.
Q63. Which W3C status code represents a CSS specification that is fully implemented by modern browsers?
- [ ] Proposed Recommendation
- [ ] Working Draft
- [x] Recommendation
- [ ] Candidate Recommendation
Q64. Are any of the following declarations invalid?
```css
color: red; /* declaration A */
font-size: 1em; /* declaration B */
padding: 10px 0; /* declaration C */
```
- [ ] Declaration A is invalid.
- [ ] Declaration B is invalid.
- [ ] Declaration C is invalid.
- [x] All declarations are valid.
Q65. Which CSS will cause your links to have a solid blue background that changes to semitransparent on hover?
- [x]
  ```css
  a:link {
    background: #0000ff;
  }
  a:hover {
    background: rgba(0, 0, 255, 0.5);
  }
  ```
- [ ]
  ```css
  a {
    color: blue;
  }
  a:hover {
    background: white;
  }
  ```
- [ ]
  ```css
  a:link {
    background: blue;
  }
  a:hover {
    color: rgba(0, 0, 255, 0.5);
  }
  ```
- [ ]
  ```css
  a:hover {
    background: rgba(blue, 50%);
  }
  a:link {
    background: rgba(blue);
  }
  ```
Q66. Which CSS rule takes precedence over the others listed?
- [ ] `div.sidebar {}`
- [ ] `* {}`
- [x] `div#sidebar2 p {}`
- [ ] `.sidebar p {}`
Q67. The body of your page includes some HTML sections. How will it look with the following CSS applied?
```css
body {
  background: #ffffff; /* white */
}
section {
  background: #0000ff; /* blue */
  height: 200px;
}
```

- [x] blue sections on a white background
- [ ] yellow sections on a blue background
- [ ] green sections on a white background
- [ ] blue sections on a red background
Q68. Which CSS keyword can you use to override standard source order and specificity rules?
- [ ] `!elevate!`
- [ ] `*prime`
- [ ] `override`
- [x] `!important`
Q69. You can use the ___ pseudo-class to set a different color on a link if it was clicked on.

- [x] `a:visited`
- [ ] `a:hover`
- [ ] `a:link`
- [ ] `a:focus`
Q70. Which color will look the brightest on your screen, assuming the background is white?
- [ ] `background-color: #aaa;`
- [ ] `background-color: #999999;`
- [ ] `background-color: rgba(170,170,170,0.5);`
- [x] `background-color: rgba(170,170,170,0.2);`
Q71. Which CSS selector can you use to select all elements on your page associated with the two classes header and clear?
- [ ] `."header clear" {}`
- [ ] `header#clear {}`
- [x] `.header.clear {}`
- [ ] `.header clear {}`
Q72. A universal selector is specified using a(n) _.
- [ ] “h1” string
- [ ] “a” character
- [ ] “p” character
- [x] “*” character
Q73. In the following CSS code, `h1` is the _, while `color` is the _.

```css
h1 {
  color: red;
}
```

- [ ] property; declaration
- [ ] declaration; rule
- [ ] “p” character
- [x] selector; property
Q74. What is an alternate way to define the following CSS rule?
```css
font-weight: bold;
```
- [ ] font-weight: 400;
- [ ] font-weight: medium;
- [x] font-weight: 700;
- [ ] font-weight: Black;
Q75. You want your styling to be based on a font stack consisting of three fonts. Where should the generic font for your font family be specified?
- [ ] It should be the first one on the list.
- [ ] Generic fonts are discouraged from this list.
- [x] It should be the last one on the list.
- [ ] It should be the second one on the list.
Q76. What is one disadvantage of using a web font service?
- [ ] It requires you to host font files on your own server.
- [ ] It uses more of your site’s bandwidth.
- [ ] It offers a narrow selection of custom fonts.
- [x] It is not always a free service.
Q77. How do you add Google fonts to your project?
- [x] by using an HTML link element referring to a Google-provided CSS
- [ ] by embedding the font file directly into the project’s master JavaScript
- [ ] by using a Google-specific CSS syntax that directly links to the desired font file
- [ ] by using a standard font-face CSS definition sourcing a font file on Google’s servers
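The quiz's answer refers to an HTML `<link>` element pointing at Google's generated stylesheet; the same stylesheet can also be pulled in from CSS with `@import`. A sketch using the standard Google Fonts URL pattern (the font choice is illustrative):

```css
@import url('https://fonts.googleapis.com/css2?family=Roboto&display=swap');

body {
  font-family: 'Roboto', sans-serif;
}
```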
Q78. which choice is not a valid color?
- [ ] color:
#000
; - [ ] color:
rgb(0,0,0)
; - [ ] color:
#000000
; - [x] color:
000000
;
Q79. Using the following HTML and CSS example, what will the equivalent pixel values be for the .em and .rem elements?

```css
html { font-size: 10px; }
body { font-size: 2rem; }
.rem { font-size: 1.5rem; }
.em { font-size: 2em; }
```

```html
<p class="rem"></p>
<p class="em"></p>
```

- [ ] The .rem will be equivalent to 25px; the .em value will be 20px.
- [ ] The .rem will be equivalent to 15px; the .em value will be 20px.
- [x] The .rem will be equivalent to 15px; the .em value will be 40px.
- [ ] The .rem will be equivalent to 20px; the .em value will be 40px.
Q80. In this example, according to cascading and specificity rules, what color will the link be?
```css
.example {color: yellow;}
ul li a {color: blue;}
ul a {color: green;}
a {color: red;}
```

```html
<ul>
  <li><a href="#">link</a></li>
  <li>list item</li>
  <li>list item</li>
</ul>
```
- [ ] blue
- [ ] red
- [x] yellow
- [ ] green
Q81. What property is used to adjust the space between text characters?
- [ ] `font-style`
- [ ] `text-transform`
- [ ] `font-variant`
- [x] `letter-spacing`
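For example (the selectors are illustrative):

```css
h1 {
  letter-spacing: 2px;    /* widens the space between characters */
}
.tight {
  letter-spacing: -0.5px; /* negative values tighten it */
}
```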
#### Q82. What is the correct syntax for changing the cursor from an arrow to a pointing hand when it interacts with a named element?

- [x] `.element {cursor: pointer;}`
- [ ] `.element {cursor: hand;}`
- [ ] `.element {cursor: move-hand;}`
- [ ] `.element {cursor: pointer-hand;}`
By Bryan Guner on June 3, 2021.
Exported from Medium on August 24, 2021.
Data Structures & Algorithms Resource List Part 1
Guess the author of the following quotes:
Data Structures & Algorithms Resource List Part 1
Guess the author of the following quotes:
> Talk is cheap. Show me the code.
> Software is like sex: it’s better when it’s free.
> Microsoft isn’t evil, they just make really crappy operating systems.
Here’s some more:
- The Framework for Learning Algorithms and intense problem solving exercises
- Algs4: Recommended book for Learning Algorithms and Data Structures
- An analysis of Dynamic Programming
- Dynamic Programming Q&A — What is Optimal Substructure
- The Framework for Backtracking Algorithm
- Binary Search in Detail: I wrote a Poem
- The Sliding Window Technique
- Difference Between Process and Thread in Linux
- Some Good Online Practice Platforms
- Dynamic Programming in Details
- Dynamic Programming Q&A — What is Optimal Substructure
- Classic DP: Longest Common Subsequence
- Classic DP: Edit Distance
- Classic DP: Super Egg
- Classic DP: Super Egg (Advanced Solution)
- The Strategies of Subsequence Problem
- Classic DP: Game Problems
- Greedy: Interval Scheduling
- KMP Algorithm In Detail
- A solution to all Buy Time to Buy and Sell Stock Problems
- A solution to all House Robber Problems
- 4 Keys Keyboard
- Regular Expression
- Longest Increasing Subsequence
- The Framework for Learning Algorithms and intense problem solving exercises
- Algs4: Recommended book for Learning Algorithms and Data Structures
- Binary Heap and Priority Queue
- LRU Cache Strategy in Detail
- Collections of Binary Search Operations
- Special Data Structure: Monotonic Stack
- Special Data Structure: Monotonic Stack
- Design Twitter
- Reverse Part of Linked List via Recursion
- Queue Implement Stack/Stack implement Queue
- My Way to Learn Algorithm
- The Framework of Backtracking Algorithm
- Binary Search in Detail
- Backtracking Solve Subset/Permutation/Combination
- Diving into the technical parts of Double Pointers
- Sliding Window Technique
- The Core Concept of TwoSum Problems
- Common Bit Manipulations
- Breaking down a Complicated Problem: Implement a Calculator
- Pancake Sorting Algorithm
- Prefix Sum: Intro and Concept
- String Multiplication
- FloodFill Algorithm in Detail
- Interval Scheduling: Interval Merging
- Interval Scheduling: Intersections of Intervals
- Russian Doll Envelopes Problem
- A collection of counter-intuitive Probability Problems
- Shuffle Algorithm
- Recursion In Detail
- How to Implement LRU Cache
- How to Find Prime Number Efficiently
- How to Calculate Minimum Edit Distance
- How to use Binary Search
- How to efficiently solve Trapping Rain Water Problem
- How to Remove Duplicates From Sorted Array
- How to Find Longest Palindromic Substring
- How to Reverse Linked List in K Group
- How to Check the Validation of Parenthesis
- How to Find Missing Element
- How to Find Duplicates and Missing Elements
- How to Check Palindromic LinkedList
- How to Pick Elements From an Infinite Arbitrary Sequence
- How to Schedule Seats for Students
- Union-Find Algorithm in Detail
- Union-Find Application
- Problems that can be solved in one line
- Find Subsequence With Binary Search
- Difference Between Process and Thread in Linux
- You Must Know About Linux Shell
- You Must Know About Cookie and Session
- Cryptology Algorithm
- Some Good Online Practice Platforms
Algorithms:
- 100 days of algorithms
- Algorithms — Solved algorithms and data structures problems in many languages.
- Algorithms by Jeff Erickson (Code) (HN)
- Top algos/DS to learn
- Some neat algorithms
- Mathematical Proof of Algorithm Correctness and Efficiency (2019)
- Algorithm Visualizer — Interactive online platform that visualizes algorithms from code.
- Algorithms for Optimization book
- Collaborative book on algorithms (Code)
- Algorithms in C by Robert Sedgewick
- Algorithm Design Manual
- MIT Introduction to Algorithms course (2011)
- How to implement an algorithm from a scientific paper (2012)
- Quadsort — Stable non-recursive merge sort named quadsort.
- System design algorithms — Algorithms you should know before system design.
- Algorithms Design book
- Think Complexity
- All Algorithms implemented in Rust
- Solutions to Introduction to Algorithms book (Code)
- Maze Algorithms (2011) (HN)
- Algorithmic Design Paradigms book (Code)
- Words and buttons Online Blog (Code)
- Algorithms animated
- Cache Oblivious Algorithms (2020) (HN)
- You could have invented fractional cascading (2012)
- Guide to learning algorithms through LeetCode (Code) (HN)
- How hard is unshuffling a string?
- Optimization Algorithms on Matrix Manifolds
- Problem Solving with Algorithms and Data Structures (HN) (PDF)
- Algorithms implemented in Python
- Algorithms implemented in JavaScript
- Algorithms & Data Structures in Java
- Wolfsort — Stable adaptive hybrid radix / merge sort.
- Evolutionary Computation Bestiary — Bestiary of evolutionary, swarm and other metaphor-based algorithms.
- Elements of Programming book — Decomposing programs into a system of algorithmic components. (Review) (HN) (Lobsters)
- Competitive Programming Algorithms
- CPP/C — C/C++ algorithms/DS problems.
- How to design an algorithm (2018)
- CSE 373 — Introduction to Algorithms, by Steven Skiena (2020)
- Computer Algorithms II course (2020)
- Improving Binary Search by Guessing (2019)
- The case for a learned sorting algorithm (2020) (HN)
- Elementary Algorithms — Introduces elementary algorithms and data structures. Includes side-by-side comparisons of purely functional realizations and their imperative counterparts.
- Combinatorics Algorithms for Coding Interviews (2018)
- Algorithms written in different programming languages (Web)
- Solving the Sequence Alignment problem in Python (2020)
- The Sound of Sorting — Visualization and “Audibilization” of Sorting Algorithms. (Web)
- Miniselect: Practical and Generic Selection Algorithms (2020)
- The Slowest Quicksort (2019)
- Functional Algorithm Design (2020)
- Algorithms To Live By — Book Notes
- Numerical Algorithms (2015)
- Using approximate nearest neighbor search in real world applications (2020)
- In search of the fastest concurrent Union-Find algorithm (2019)
- Computer Science 521 Advanced Algorithm Design
- Data Structures and Algorithms implementation in Go
- Which algorithms/data structures should I “recognize” and know by name?
- Dictionary of Algorithms and Data Structures
- Phil’s Data Structure Zoo
- The Periodic Table of Data Structures (HN)
- Data Structure Visualizations (HN)
- Data structures to name-drop when you want to sound smart in an interview
- On lists, cache, algorithms, and microarchitecture (2019)
- Topics in Advanced Data Structures (2019) (HN)
- CS166 Advanced DS Course (2019)
- Advanced Data Structures (2017) (HN)
- Write a hash table in C
- Python Data Structures and Algorithms
- HAMTs from Scratch (2018)
- JavaScript Data Structures and Algorithms
- Implementing a Key-Value Store series
- Open Data Structures — Provide a high-quality open content data structures textbook that is both mathematically rigorous and provides complete implementations. (Code)
- A new analysis of the false positive rate of a Bloom filter (2009)
- Ideal Hash Trees
- RRB-Trees: Efficient Immutable Vectors
- Some data structures and algorithms written in OCaml
- Let’s Invent B(+)-Trees (HN)
- Anna — Low-latency, cloud-native KVS.
- Persistent data structures thanks to recursive type aliases (2019)
- Log-Structured Merge-Trees (2020)
- Bloom Filters for the Perplexed (2017)
- Understanding Bloom Filters (2020)
- Dense vs. Sparse Indexes (2020)
- Data Structures and Algorithms Problems
- Data Structures & Algorithms I Actually Used Working at Tech Companies (2020) (Lobsters) (HN)
- Let’s implement a Bloom Filter (2020) (HN)
- Data Structures Part 1: Bulk Data (2019) (Lobsters)
- Data Structures Explained
- Introduction to Cache-Oblivious Data Structures (2018)
- The Daily Coding newsletter — Master JavaScript and Data Structures.
- Lectures Note for Data Structures and Algorithms (2019)
- Mechanically Deriving Binary Tree Iterators with Continuation Defunctionalization (2020)
- Segment Tree data structure
- Structure of a binary state tree (2020)
- Introductory data structures and algorithms
- Applying Textbook Data Structures for Real Life Wins (2020) (HN)
- Michael Scott — Nonblocking data structures lectures (2020) — Nonblocking concurrent data structures are an increasingly valuable tool for shared-memory parallel programming.
- Scal — High-performance multicore-scalable data structures and benchmarks. (Web)
- Hyperbolic embedding implementations
- Morphisms of Computational Constructs — Visual catalogue + story of morphisms displayed across computational structures.
- What is key-value store? (build-your-own-x) (2020)
- Lesser Known but Useful Data Structures
- Using Bloom filters to efficiently synchronize hash graphs (2020)
- Bloom Filters by Example (Code)
- Binary Decision Diagrams (HN)
- 3 Steps to Designing Better Data Structures (2020)
- Sparse Matrices (2019) (HN)
- Algorithms & Data Structures in C++
- Fancy Tree Traversals (2019)
- The Robson Tree Traversal (2019)
- Data structures and program structures
- cdb — Fast, reliable, simple package for creating and reading constant databases.
- PGM-index — Learned indexes that match B-tree performance with 83x less space. (HN) (Code)
- Structural and pure attributes
- Cache-Tries: O(1) Concurrent Lock-Free Hash Tries (2018)
By Bryan Guner on June 4, 2021.
Exported from Medium on August 24, 2021.
Data Structures… Under The Hood
Data Structures Reference
Array
Stores things in order. Has quick lookups by index.
Linked List
Also stores things in order. Faster insertions and deletions than
arrays, but slower lookups (you have to “walk down” the whole list).
Queue
Like the line outside a busy restaurant. “First come, first served.”
Stack
Like a stack of dirty plates in the sink. The first one you take off the
top is the last one you put down.
Tree
Good for storing hierarchies. Each node can have “child” nodes.
Binary Search Tree
Everything in the left subtree is smaller than the current node, and
everything in the right subtree is larger. Fast O(lg n) lookups, but only
if the tree is balanced!
Graph
Good for storing networks, geography, social relationships, etc.
Heap
A binary tree where the smallest value is always at the top. Use it to implement a priority queue.
A binary heap is a binary tree where the nodes are organized so that the smallest value is always at the top.
Adjacency list
A list where the index represents the node and the value at that index is a list of the node’s neighbors:
graph = [ [1], [0, 2, 3], [1, 3], [1, 2], ]
Since node 3 has edges to nodes 1 and 2, graph[3] has the adjacency list [1, 2].
We could also use a dictionary where the keys represent the node and the values are the lists of neighbors.
graph = { 0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2], }
This would be useful if the nodes were represented by strings, objects, or otherwise didn’t map cleanly to list indices.
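For example, a sketch of the dictionary form with made-up string node names:

```python
# Adjacency "list" as a dict keyed by strings (node names are hypothetical)
graph = {
    "a": ["b"],
    "b": ["a", "c", "d"],
    "c": ["b", "d"],
    "d": ["b", "c"],
}

# The neighbors of any node are one dict lookup away
assert graph["d"] == ["b", "c"]
```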
Adjacency matrix
A matrix of 0s and 1s indicating whether node x connects to node y (0 means no, 1 means yes).
graph = [ [0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0], ]
Since node 3 has edges to nodes 1 and 2, graph[3][1] and graph[3][2] have value 1.
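With the matrix form, checking whether an edge exists is a constant-time double index. A small sketch:

```python
# Adjacency matrix from the example above: graph[x][y] is 1 if x and y connect
graph = [
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
]

# Node 3 connects to nodes 1 and 2...
assert graph[3][1] == 1 and graph[3][2] == 1
# ...but there's no edge between nodes 0 and 2
assert graph[0][2] == 0
```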
```python
a = LinkedListNode(5)
b = LinkedListNode(1)
c = LinkedListNode(9)

a.next = b
b.next = c
```
Arrays
Ok, so we know how to store individual numbers. Let’s talk about storing several numbers.
That’s right, things are starting to heat up.
Suppose we wanted to keep a count of how many bottles of kombucha we drink every day.
Let’s store each day’s kombucha count in an 8-bit, fixed-width, unsigned integer. That should be plenty — we’re not likely to get through more than 256 (2⁸) bottles in a single day, right?
And let’s store the kombucha counts right next to each other in RAM, starting at memory address 0:
Bam. That’s an array. RAM is basically an array already.
Just like with RAM, the elements of an array are numbered. We call that number the index of the array element (plural: indices). In this example, each array element’s index is the same as its address in RAM.
But that’s not usually true. Suppose another program like Spotify had already stored some information at memory address 2:
We’d have to start our array below it, for example at memory address 3. So index 0 in our array would be at memory address 3, and index 1 would be at memory address 4, etc.:
Suppose we wanted to get the kombucha count at index 4 in our array. How do we figure out what address in memory to go to? Simple math:
Take the array’s starting address (3), add the index we’re looking for (4), and that’s the address of the item we’re looking for. 3 + 4 = 7. In general, for getting the nth item in our array:
address of nth item in array = address of array start + n
This works out nicely because the size of the addressed memory slots and the size of each kombucha count are both 1 byte. So a slot in our array corresponds to a slot in RAM.
But that’s not always the case. In fact, it’s usually not the case. We usually use 64-bit integers.
So how do we build an array of 64-bit (8 byte) integers on top of our 8-bit (1 byte) memory slots?
We simply give each array index 8 address slots instead of 1:
So we can still use simple math to grab the start of the nth item in our array — just gotta throw in some multiplication:
address of nth item in array = address of array start + (n * size of each item in bytes)
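Both address formulas can be collapsed into one tiny helper. This is a toy model (the function name is made up; real arrays do this with pointer arithmetic):

```python
def item_address(array_start, n, item_size_bytes):
    """Address of the nth item, assuming fixed-width items
    stored contiguously in memory."""
    return array_start + n * item_size_bytes

# The kombucha example: array starts at address 3, items are 1 byte each
assert item_address(3, 4, 1) == 7

# With 64-bit (8-byte) integers, each index spans 8 address slots
assert item_address(3, 4, 8) == 35
```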
Don’t worry: adding this multiplication doesn’t really slow us down. Remember, addition, subtraction, multiplication, and division of fixed-width integers takes O(1) time. So all the math we’re using here to get the address of the nth item in the array takes O(1) time.
And remember how we said the memory controller has a direct connection to each slot in RAM? That means we can read the stuff at any given memory address in O(1) time.
Together, this means looking up the contents of a given array index is O(1) time. This fast lookup capability is the most important property of arrays.
But the formula we used to get the address of the nth item in our array only works if:
- Each item in the array is the same size (takes up the same
number of bytes).
- The array is uninterrupted (contiguous) in memory. There can’t
be any gaps in the array…like to “skip over” a memory slot Spotify was already using.
These things make our formula for finding the nth item work because they make our array predictable. We can predict exactly where in memory the nth element of our array will be.
But they also constrain what kinds of things we can put in an array. Every item has to be the same size. And if our array is going to store a lot of stuff, we’ll need a bunch of uninterrupted free space in RAM. Which gets hard when most of our RAM is already occupied by other programs (like Spotify).
That’s the tradeoff. Arrays have fast lookups (O(1) time), but each item in the array needs to be the same size, and you need a big block of uninterrupted free memory to store the array.
## Pointers
Remember how we said every item in an array had to be the same size? Let’s dig into that a little more.
Suppose we wanted to store a bunch of ideas for baby names. Because we’ve got some really cute ones.
Each name is a string. Which is really an array. And now we want to store those arrays in an array. Whoa.
Now, what if our baby names have different lengths? That’d violate our rule that all the items in an array need to be the same size!
We could put our baby names in fixed-size arrays (say, 13 characters each), and just use a special character to mark the end of the string within each array…
“Wigglesworth” is a cute baby name, right?
But look at all that wasted space after “Bill”. And what if we wanted to store a string that was more than 13 characters? We’d be out of luck.
There’s a better way. Instead of storing the strings right inside our array, let’s just put the strings wherever we can fit them in memory. Then we’ll have each element in our array hold the address in memory of its corresponding string. Each address is an integer, so really our outer array is just an array of integers. We can call each of these integers a pointer, since it points to another spot in memory.
The pointers are marked with a * at the beginning.
Pretty clever, right? This fixes both the disadvantages of arrays:
- The items don’t have to be the same length — each string can be as
long or as short as we want.
- We don’t need enough uninterrupted free memory to store all our
strings next to each other — we can place each of them separately, wherever there’s space in RAM.
We fixed it! No more tradeoffs. Right?
Nope. Now we have a new tradeoff:
Remember how the memory controller sends the contents of nearby memory addresses to the processor with each read? And the processor caches them? So reading sequential addresses in RAM is faster because we can get most of those reads right from the cache?
Our original array was very cache-friendly, because everything was sequential. So reading from the 0th index, then the 1st index, then the 2nd, etc. got an extra speedup from the processor cache.
But the pointers in this array make it not cache-friendly, because the baby names are scattered randomly around RAM. So reading from the 0th index, then the 1st index, etc. doesn’t get that extra speedup from the cache.
That’s the tradeoff. This pointer-based array requires less uninterrupted memory and can accommodate elements that aren’t all the same size, but it’s slower because it’s not cache-friendly.
This slowdown isn’t reflected in the big O time cost. Lookups in this pointer-based array are still O(1) time.
Linked lists
Our word processor is definitely going to need fast appends — appending to the document is like the main thing you do with a word processor.
Can we build a data structure that can store a string, has fast appends, and doesn’t require you to say how long the string will be ahead of time?
Let’s focus first on not having to know the length of our string ahead of time. Remember how we used pointers to get around length issues with our array of baby names?
What if we pushed that idea even further?
What if each character in our string were a two-item array with:
1. the character itself
2. a pointer to the next character
We would call each of these two-item arrays a node and we’d call this series of nodes a linked list.
Here’s how we’d actually implement it in memory:
Notice how we’re free to store our nodes wherever we can find two open slots in memory. They don’t have to be next to each other. They don’t even have to be in order:
“But that’s not cache-friendly,” you may be thinking. Good point! We’ll get to that.
The first node of a linked list is called the head, and the last node is usually called the tail.
Confusingly, some people prefer to use “tail” to refer to everything after the head of a linked list. In an interview it’s fine to use either definition. Briefly say which definition you’re using, just to be clear.
It’s important to have a pointer variable referencing the head of the list — otherwise we’d be unable to find our way back to the start of the list!
We’ll also sometimes keep a pointer to the tail. That comes in handy when we want to add something new to the end of the linked list. In fact, let’s try that out:
Suppose we had the string “LOG” stored in a linked list:
Suppose we wanted to add an “S” to the end, to make it “LOGS”. How would we do that?
Easy. We just put it in a new node:
1. Grab the last letter, which is “G”. Our tail pointer lets us do this in O(1) time.
2. Point the last letter’s next to the letter we’re appending (“S”).
3. Update the tail pointer to point to our new last letter, “S”.
Why is it O(1) time? Because the runtime doesn’t get bigger if the string gets bigger. No matter how many characters are in our string, we still just have to tweak a couple pointers for any append.
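The three append steps can be sketched with a minimal node class (names here are hypothetical, not the author's exact implementation):

```python
class LinkedListNode:
    def __init__(self, value):
        self.value = value
        self.next = None

def append(tail, value):
    """O(1) append: tweak a couple of pointers, no matter how long the list is."""
    new_node = LinkedListNode(value)   # put the new letter in a new node
    tail.next = new_node               # point the old tail's next at it
    return new_node                    # caller updates its tail pointer

# "LOG" stored as a linked list:
head = LinkedListNode("L")
head.next = LinkedListNode("O")
head.next.next = LinkedListNode("G")
tail = head.next.next

tail = append(tail, "S")  # the list now spells "LOGS"
```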
Now, what if instead of a linked list, our string had been a dynamic array? We might not have any room at the end, forcing us to do one of those doubling operations to make space:
So with a dynamic array, our append would have a worst-case time cost of O(n).
Linked lists have worst-case O(1)-time appends, which is better than the worst-case O(n)-time appends of dynamic arrays.
That worst-case part is important. The average case runtime for appends to linked lists and dynamic arrays is the same: O(1).
Now, what if we wanted to *pre*pend something to our string? Let’s say we wanted to put a “B” at the beginning.
For our linked list, it’s just as easy as appending. Create the node:
1. Point “B”’s next to “L”.
2. Point the head to “B”.
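Those two steps, sketched with a minimal node class (repeated here so the snippet runs on its own; names are hypothetical):

```python
class LinkedListNode:
    def __init__(self, value):
        self.value = value
        self.next = None

def prepend(head, value):
    """O(1) prepend: the new node simply points at the old head."""
    new_node = LinkedListNode(value)
    new_node.next = head  # point "B"'s next at "L"
    return new_node       # this node is the new head

head = LinkedListNode("L")
head = prepend(head, "B")
```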
But if our string were a dynamic array…
And we wanted to add in that “B”:
Eep. We have to make room for the “B”!
We have to move each character one space down:
Now we can drop the “B” in there:
It’s all in the step where we made room for the first letter. We had to move all n characters in our string. One at a time. That’s O(n) time.
So linked lists have faster prepends (O(1) time) than dynamic arrays (O(n) time).
No “worst case” caveat this time: prepends for dynamic arrays are always O(n) time, and prepends for linked lists are always O(1) time.
These quick appends and prepends for linked lists come from the fact that linked list nodes can go anywhere in memory. They don’t have to sit right next to each other the way items in an array do.
So if linked lists are so great, why do we usually store strings in an array? Because arrays have O(1)-time lookups. And those constant-time lookups come from the fact that all the array elements are lined up next to each other in memory.
Lookups with a linked list are more of a process, because we have no way of knowing where the ith node is in memory. So we have to walk through the linked list node by node, counting as we go, until we hit the ith item.
```python
def get_ith_item_in_linked_list(head, i):
    if i < 0:
        raise ValueError("i can't be negative: %d" % i)
    current_node = head
    current_position = 0
    while current_node:
        if current_position == i:
            # Found it!
            return current_node
        # Move on to the next node
        current_node = current_node.next
        current_position += 1
    raise ValueError('List has fewer than i + 1 (%d) nodes' % (i + 1))
```
That’s i + 1 steps down our linked list to get to the ith node (we made our function zero-based to match indices in arrays). So linked lists have O(i)-time lookups. Much slower than the O(1)-time lookups for arrays and dynamic arrays.
Not only that — walking down a linked list is not cache-friendly. Because the next node could be anywhere in memory, we don’t get any benefit from the processor cache. This means lookups in a linked list are even slower.
So the tradeoff with linked lists is they have faster prepends and faster appends than dynamic arrays, but they have slower lookups.
## Doubly Linked Lists
In a basic linked list, each item stores a single pointer to the next element.
In a doubly linked list, items have pointers to the next and the previous nodes.
Doubly linked lists allow us to traverse our list backwards. In a singly linked list, if you just had a pointer to a node in the middle of a list, there would be no way to know what nodes came before it. Not a problem in a doubly linked list.
Not cache-friendly
Most computers have caching systems that make reading from sequential addresses in memory faster than reading from scattered addresses.
Array items are always located right next to each other in computer memory, but linked list nodes can be scattered all over.
So iterating through a linked list is usually quite a bit slower than iterating through the items in an array, even though they’re both theoretically O(n) time.
Hash tables
Quick lookups are often really important. For that reason, we tend to use arrays (O(1)-time lookups) much more often than linked lists (O(i)-time lookups).
For example, suppose we wanted to count how many times each ASCII character appears in Romeo and Juliet. How would we store those counts?
We can use arrays in a clever way here. Remember — characters are just numbers. In ASCII (a common character encoding) ‘A’ is 65, ‘B’ is 66, etc.
So we can use the character(‘s number value) as the index in our array, and store the count for that character at that index in the array:
With this array, we can look up (and edit) the count for any character in constant time. Because we can access any index in our array in constant time.
Something interesting is happening here — this array isn’t just a list of values. This array is storing two things: characters and counts. The characters are implied by the indices.
So we can think of an array as a table with two columns…except you don’t really get to pick the values in one column (the indices) — they’re always 0, 1, 2, 3, etc.
But what if we wanted to put any value in that column and still get quick lookups?
Suppose we wanted to count the number of times each word appears in Romeo and Juliet. Can we adapt our array?
Translating a character into an array index was easy. But we’ll have to do something more clever to translate a word (a string) into an array index…
Here’s one way we could do it:
Grab the number value for each character and add those up.
The result is 429. But what if we only have 30 slots in our array? We’ll use a common trick for forcing a number into a specific range: the modulus operator (%). Modding our sum by 30 ensures we get a whole number that’s less than 30 (and at least 0):
429 % 30 = 9
Bam. That’ll get us from a word (or any string) to an array index.
This data structure is called a hash table or hash map. In our hash table, the counts are the values and the words (“lies,” etc.) are the keys (analogous to the indices in an array). The process we used to translate a key into an array index is called a hashing function.
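Our toy hashing function, written out (a simplified sketch, as noted below; real hash functions are far more sophisticated):

```python
def hash_word(word, num_slots=30):
    """Toy hashing function: sum the character codes, then mod by
    the array size to force the result into a valid index."""
    return sum(ord(ch) for ch in word) % num_slots

# 'l' + 'i' + 'e' + 's' = 108 + 105 + 101 + 115 = 429; 429 % 30 = 9
assert hash_word("lies") == 9
```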
[Figure: the word “lies” (the key) passes through the hashing function and lands at index 9 of the array, where the value 20 is stored.]
The hashing functions used in modern systems get pretty complicated — the one we used here is a simplified example.
Note that our quick lookups are only in one direction — we can quickly get the value for a given key, but the only way to get the key for a given value is to walk through all the values and keys.
Same thing with arrays — we can quickly look up the value at a given index, but the only way to figure out the index for a given value is to walk through the whole array.
One problem — what if two keys hash to the same index in our array? Look at “lies” and “foes”:
They both sum up to 429! So of course they’ll have the same answer when we mod by 30:
429 % 30 = 9
So our hashing function gives us the same answer for “lies” and “foes.” This is called a hash collision. There are a few different strategies for dealing with them.
Here’s a common one: instead of storing the actual values in our array, let’s have each array slot hold a pointer to a linked list holding the counts for all the words that hash to that index:
One problem — how do we know which count is for “lies” and which is for “foes”? To fix this, we’ll store the word as well as the count in each linked list node:
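As a sketch of the chaining idea (using Python lists in place of linked lists purely for brevity; class and method names are made up):

```python
class HashTable:
    """Toy hash table with separate chaining: each slot holds the
    (key, value) pairs for every key that hashes to that index."""

    def __init__(self, num_slots=30):
        self.slots = [[] for _ in range(num_slots)]

    def _hash(self, key):
        return sum(ord(ch) for ch in key) % len(self.slots)

    def set(self, key, value):
        bucket = self.slots[self._hash(key)]
        for pair in bucket:
            if pair[0] == key:     # key already present: overwrite
                pair[1] = value
                return
        bucket.append([key, value])

    def get(self, key):
        for k, v in self.slots[self._hash(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = HashTable()
t.set("lies", 20)
t.set("foes", 2)   # collides with "lies", but chaining keeps both
assert t.get("lies") == 20 and t.get("foes") == 2
```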
“But wait!” you may be thinking, “Now lookups in our hash table take O(n) time in the worst case, since we have to walk down a linked list.” That’s true! You could even say that in the worst case every key creates a hash collision, so our whole hash table degrades to a linked list.
In industry though, we usually wave our hands and say collisions are rare enough that on average lookups in a hash table are O(1) time. And there are fancy algorithms that keep the number of collisions low and keep the lengths of our linked lists nice and short.
But that’s sort of the tradeoff with hash tables. You get fast O(1) lookups by key…except some lookups could be slow. And of course, you only get those fast lookups in one direction: looking up the key for a given value still takes O(n) time.
Breadth-First Search (BFS) and Breadth-First Traversal
Breadth-first search (BFS) is a method for exploring a tree or graph. In a BFS, you first explore all the nodes one step away, then all the nodes two steps away, etc.
Breadth-first search is like throwing a stone in the center of a pond. The nodes you explore “ripple out” from the starting point.
Here’s how a BFS would traverse this tree, starting with the root:
We’d visit all the immediate children (all the nodes that’re one step away from our starting node):
Then we’d move on to all those nodes’ children (all the nodes that’re two steps away from our starting node):
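The ripple-out order falls out of using a queue: nodes are visited in the order they were discovered. A minimal sketch (the helper name and the adjacency dict are made up for illustration):

```python
from collections import deque

def bfs(graph, start):
    """Return nodes in breadth-first order from `start`.
    `graph` is an adjacency dict: node -> list of neighbors."""
    visited = [start]
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()          # oldest discovery first
        for neighbor in graph[node]:
            if neighbor not in seen:    # skip anything already found
                seen.add(neighbor)
                visited.append(neighbor)
                queue.append(neighbor)
    return visited

tree = {"root": ["a", "b"], "a": ["c", "d"], "b": ["e"],
        "c": [], "d": [], "e": []}
assert bfs(tree, "root") == ["root", "a", "b", "c", "d", "e"]
```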
Breadth-first search is often compared with depth-first search.
Advantages:
- A BFS will find the shortest path between the starting point and
any other reachable node. A depth-first search will not necessarily find the shortest path.
Disadvantages:
- A BFS on a binary tree generally requires more memory than a DFS.
A binary tree is a tree where every node has two or fewer children.
The children are usually called left and right.
```python
class BinaryTreeNode(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None
```
This lets us build a structure like this:
That particular example is special because every level of the tree is completely full. There are no “gaps.” We call this kind of tree “perfect.”
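A self-contained sketch of building such a perfect three-level tree (the minimal node class is repeated here so the snippet runs on its own):

```python
class BinaryTreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

# Level 0
root = BinaryTreeNode(1)
# Level 1
root.left = BinaryTreeNode(2)
root.right = BinaryTreeNode(3)
# Level 2: every node on the last level is filled in, so no "gaps"
root.left.left = BinaryTreeNode(4)
root.left.right = BinaryTreeNode(5)
root.right.left = BinaryTreeNode(6)
root.right.right = BinaryTreeNode(7)
```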
Binary trees have a few interesting properties when they’re perfect:
Property 1: the number of total nodes on each “level” doubles as we move down the tree.
Property 2: the number of nodes on the last level is equal to the sum of the number of nodes on all other levels (plus 1). In other words, about half of our nodes are on the last level.
Let’s call the number of nodes n, and the height of the tree h.
h can also be thought of as the “number of levels.”
If we had h, how could we calculate n?
Let’s just add up the number of nodes on each level!
If we zero-index the levels, the number of nodes on the xth level is exactly 2^x.
- Level 0: 2⁰ nodes
- Level 1: 2¹ nodes
- Level 2: 2² nodes
- Level 3: 2³ nodes
- etc.
So our total number of nodes is:
n = 2⁰ + 2¹ + 2² + 2³ + … + 2^{h-1}
Why only up to 2^{h-1}?
Notice that we started counting our levels at 0.
- So if we have h levels in total,
- the last level is actually the “h-1”-th level.
- That means the number of nodes on the last level is 2^{h-1}.
But we can simplify.
Property 2 tells us that the number of nodes on the last level is (1 more than) half of the total number of nodes,
so we can just take the number of nodes on the last level, multiply it by 2, and subtract 1 to get the number of nodes overall.
- We know the number of nodes on the last level is 2^{h-1},
- So:
n = 2^{h-1} * 2 - 1
n = 2^{h-1} * 2¹ - 1
n = 2^{h-1+1} - 1
n = 2^h - 1
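A quick sanity check of that identity in code:

```python
# For a perfect binary tree: summing the nodes on every level,
# 2^0 + 2^1 + ... + 2^(h-1), should always equal 2^h - 1.
for h in range(1, 10):
    n = sum(2 ** level for level in range(h))
    assert n == 2 ** h - 1
```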
So that’s how we can go from h to n. What about the other direction?
We need to bring the h down from the exponent.
That’s what logs are for!
First, some quick review.
log₁₀(100) simply means, “What power must you raise 10 to in order to get 100?”
Which is 2, because 10² = 100.
Graph Data Structure: Directed, Acyclic, etc
Binary numbers
Let’s put those bits to use. Let’s store some stuff. Starting with numbers.
The number system we usually use (the one you probably learned in elementary school) is called base 10, because each digit has ten possible values (1, 2, 3, 4, 5, 6, 7, 8, 9, and 0).
But computers don’t have digits with ten possible values. They have bits with two possible values. So they use base 2 numbers.
Base 10 is also called decimal. Base 2 is also called binary.
To understand binary, let’s take a closer look at how decimal numbers work. Take the number “101” in decimal:
Notice we have two “1”s here, but they don’t mean the same thing. The leftmost “1” means 100, and the rightmost “1” means 1. That’s because the leftmost “1” is in the hundreds place, while the rightmost “1” is in the ones place. And the “0” between them is in the tens place.
So this “101” in base 10 is telling us we have “1 hundred, 0 tens, and 1 one.”
Notice how the places in base 10 (ones place, tens place, hundreds place, etc.) are sequential powers of 10:
- 10⁰=1 * 10¹=10 * 10²=100 * 10³=1000 * etc.
The places in binary (base 2) are sequential powers of 2:
- 2⁰=1 * 2¹=2 * 2²=4 * 2³=8 * etc.
So let’s take that same “101” but this time let’s read it as a binary number:
Reading this from right to left: we have a 1 in the ones place, a 0 in the twos place, and a 1 in the fours place. So our total is 4 + 0 + 1 which is 5.
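We can double-check that reading with Python's built-in base conversion:

```python
# "101" read as binary: 1*4 + 0*2 + 1*1 = 5
assert int("101", 2) == 5

# "101" read as decimal: 1*100 + 0*10 + 1*1 = 101
assert int("101", 10) == 101

# bin() goes the other way, from a number to its binary digits
assert bin(5) == "0b101"
```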
Web-Dev-Hub: Memoization, Tabulation, and Sorting Algorithms by Example (bgoonz-blog.netlify.app)
Deploy React App To Heroku Using Postgres & Express
Heroku is a web platform that makes deploying applications easy for a beginner.
Before you begin deploying, make sure to remove any `console.log`s or `debugger`s in any production code. You can search your entire project folder to check whether you are using them anywhere.

You will set up Heroku to run on a production, not development, version of your application. When a Node.js application like yours is pushed up to Heroku, it is identified as a Node.js application because of the `package.json` file. It runs `npm install` automatically. Then, if there is a `heroku-postbuild` script in the `package.json` file, it will run that script. Afterwards, it will automatically run `npm start`.

In the following phases, you will configure your application to work in production, not just in development, and configure the `package.json` scripts for `install`, `heroku-postbuild`, and `start` to install dependencies, build your React application, and start the Express production server.
Phase 1: Heroku Connection
If you haven’t created a Heroku account yet, create one here.
Add a new application in your Heroku dashboard named whatever you want. Under the “Resources” tab in your new application, click “Find more add-ons” and add the “Heroku Postgres” add-on with the free Hobby Dev setting.
In your terminal, install the Heroku CLI. Afterwards, login to Heroku in your terminal by running the following:
heroku login
Add Heroku as a remote to your project's git repository with the following command, replacing `<app-name>` with the name of the application you created in the Heroku dashboard:
heroku git:remote -a <app-name>
Next, you will set up your Express + React application to be deployable to Heroku.
Phase 2: Setting up your Express + React application
Right now, your React application is on a different localhost port than your Express application. However, since your React application only consists of static files that don't need to be bundled continuously with changes in production, your Express application can serve the React assets in production too. These static files live in the `frontend/build` folder after running `npm run build` in the `frontend` folder.
Add the following changes to your `backend/routes/index.js` file.
At the root route, serve the React application's static `index.html` file along with the `XSRF-TOKEN` cookie. Then serve all of the React application's static files using the `express.static` middleware. Serve `index.html` and set the `XSRF-TOKEN` cookie again on all routes that don't start with `/api`. You should already have this set up in `backend/routes/index.js`, which should now look like this:
// backend/routes/index.js
const express = require('express');
const router = express.Router();
const apiRouter = require('./api');

router.use('/api', apiRouter);

// Static routes
// Serve React build files in production
if (process.env.NODE_ENV === 'production') {
  const path = require('path');
  // Serve the frontend's index.html file at the root route
  router.get('/', (req, res) => {
    res.cookie('XSRF-TOKEN', req.csrfToken());
    res.sendFile(
      path.resolve(__dirname, '../../frontend', 'build', 'index.html')
    );
  });

  // Serve the static assets in the frontend's build folder
  router.use(express.static(path.resolve("../frontend/build")));

  // Serve the frontend's index.html file at all other routes NOT starting with /api
  router.get(/^(?!\/?api).*/, (req, res) => {
    res.cookie('XSRF-TOKEN', req.csrfToken());
    res.sendFile(
      path.resolve(__dirname, '../../frontend', 'build', 'index.html')
    );
  });
}

// Add a XSRF-TOKEN cookie in development
if (process.env.NODE_ENV !== 'production') {
  router.get('/api/csrf/restore', (req, res) => {
    res.cookie('XSRF-TOKEN', req.csrfToken());
    res.status(201).json({});
  });
}

module.exports = router;
Your Express backend's `package.json` should include scripts to run the `sequelize` CLI commands. The `backend/package.json`'s scripts should now look like this:
"scripts": {
  "sequelize": "sequelize",
  "sequelize-cli": "sequelize-cli",
  "start": "per-env",
  "start:development": "nodemon -r dotenv/config ./bin/www",
  "start:production": "node ./bin/www"
},
Initialize a `package.json` file at the very root of your project directory (outside of both the `backend` and `frontend` folders). The scripts defined in this `package.json` file will be run by Heroku, not the scripts defined in `backend/package.json` or `frontend/package.json`.
When Heroku runs `npm install`, it should install packages for both the `backend` and the `frontend`. Overwrite the `install` script in the root `package.json` with:
npm --prefix backend install backend && npm --prefix frontend install frontend
This will run `npm install` in the `backend` folder, then run `npm install` in the `frontend` folder.
Next, define a `heroku-postbuild` script that will run the `npm run build` command in the `frontend` folder. Remember, Heroku will automatically run this script after running `npm install`.
Define a `sequelize` script that will run `npm run sequelize` in the `backend` folder. Finally, define a `start` script that will run `npm start` in the `backend` folder.
The root `package.json`'s scripts should look like this:
"scripts": {
  "heroku-postbuild": "npm run build --prefix frontend",
  "install": "npm --prefix backend install backend && npm --prefix frontend install frontend",
  "dev:backend": "npm install --prefix backend start",
  "dev:frontend": "npm install --prefix frontend start",
  "sequelize": "npm run --prefix backend sequelize",
  "sequelize-cli": "npm run --prefix backend sequelize-cli",
  "start": "npm start --prefix backend"
},
The `dev:backend` and `dev:frontend` scripts are optional and will not be used by Heroku.
Finally, commit your changes.
Phase 3: Deploy to Heroku
Once you're finished setting this up, navigate to your application's Heroku dashboard. Under "Settings" there is a section for "Config Vars". Click the `Reveal Config Vars` button to see all your production environment variables. You should have a `DATABASE_URL` environment variable already from the Heroku Postgres add-on.
Add environment variables for `JWT_EXPIRES_IN` and `JWT_SECRET`, and any other environment variables you need for production.
You can also set environment variables through the Heroku CLI you installed earlier in your terminal. See the docs for Setting Heroku Config Variables.
Push your project to Heroku. Heroku only allows the `master` branch to be pushed, but you can alias your branch as `master` when pushing to Heroku. For example, to push a branch called `login-branch` to `master`, run:
git push heroku login-branch:master
If you do want to push the `master` branch, just run:
git push heroku master
You may want to make two applications on Heroku: a `master` branch site that has working code only, and a staging site that you can use to test your work-in-progress code.
Now you need to migrate and seed your production database.
Using the Heroku CLI, you can run commands inside your production application, just like in development, using the `heroku run` command.
For example, to migrate the production database, run:
heroku run npm run sequelize db:migrate
To seed the production database, run:
heroku run npm run sequelize db:seed:all
Note: You can interact with your database this way as you'd like, but beware that `db:drop` cannot be run in the Heroku environment. If you want to drop and create the database, you need to remove and add back the "Heroku Postgres" add-on.
Another way to interact with the production application is by opening a bash shell through your terminal by running:
heroku run bash
In the opened shell, you can run things like `npm run sequelize db:migrate`.
Open your deployed site and check to see if you successfully deployed your Express + React application to Heroku!
If you see an `Application Error`, or are experiencing behavior different from what you see in your local environment, check the logs by running:
heroku logs
If you want to open a connection to the logs to continuously output to your terminal, then run:
heroku logs --tail
The logs may clue you into why you are experiencing errors or different behavior.
If you found this guide helpful, feel free to check out my GitHub/gists where I host similar content:
bgoonz — Overview
Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python | React | Node.js | Express | Sequelize…github.com
Alternate Instructions:
Deploy MERN App To Heroku:
Source: Article
By Bryan Guner on March 5, 2021.
Emmet Cheat Sheet
EMMET
A toolkit for web developers
Introduction
Emmet is a productivity toolkit for web developers that uses expressions to generate HTML snippets.
Installation
Normally, installing Emmet should be a straightforward process through your editor's package manager, as most modern text editors support Emmet.
Usage
You can use Emmet in two ways:
- Tab Expand Way: type your Emmet code and press the `Tab` key.
- Interactive Method: press `alt + ctrl + Enter` and start typing your expressions. This should automatically generate HTML snippets on the fly.
This cheatsheet will assume that you press `Tab` after each expression.
HTML
Generating HTML 5 DOCTYPE
html:5
Will generate the standard HTML5 boilerplate:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
</body>
</html>
Child items
Child items are created using >
ul>li>p
<ul>
<li>
<p></p>
</li>
</ul>
Sibling Items
Sibling items are created using +
html>head+body
Multiplication
Items can be multiplied by *
ul>li*5
<ul>
<li></li>
<li></li>
<li></li>
<li></li>
<li></li>
</ul>
Grouping
Items can be grouped together using ()
table>(tr>th*5)+tr>td*5
<table>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</table>
Class and ID
Classes and IDs in Emmet are added using `.` and `#`
div.heading
div#heading
ID and class can also be combined
div#heading.center
Adding Content inside tags
Contents inside tags can be added using {}
h1{Emmet is awesome}+h2{Every front end developer should use this}+p{This is a paragraph}*2
<h1>Emmet is awesome</h1>
<h2>Every front end developer should use this</h2>
<p>This is a paragraph</p>
<p>This is a paragraph</p>
Attributes inside HTML tags
Attributes can be added using []
a[href=https://google.com data-toggle=something target=_blank]
<a href="https://google.com" data-toggle="something" target="_blank"></a>
Numbering
Numbering can be done using $
You can use this inside tags or contents.
h${This is so awesome $}*6
<h1>This is so awesome 1</h1>
<h2>This is so awesome 2</h2>
<h3>This is so awesome 3</h3>
<h4>This is so awesome 4</h4>
<h5>This is so awesome 5</h5>
<h6>This is so awesome 6</h6>
Use `@-` to reverse the numbering:
img[src=image$$@-.jpg]*5
<img src="image05.jpg" alt="">
<img src="image04.jpg" alt="">
<img src="image03.jpg" alt="">
<img src="image02.jpg" alt="">
<img src="image01.jpg" alt="">
To start the numbering from a specific number, use:
img[src=emmet$@100.jpg]*5
<img src="emmet100.jpg" alt="">
<img src="emmet101.jpg" alt="">
<img src="emmet102.jpg" alt="">
<img src="emmet103.jpg" alt="">
<img src="emmet104.jpg" alt="">
Tips
- Use `:` to expand known abbreviations:
input:date
form:post
link:css
- Building Navbar
.navbar>ul>li*3>a[href=#]{Item $@-}
<div class="navbar">
<ul>
<li><a href="#">Item 3</a></li>
<li><a href="#">Item 2</a></li>
<li><a href="#">Item 1</a></li>
</ul>
</div>
CSS
Emmet works surprisingly well with CSS as well.
- `f:l` → `float: left;` (you can also use any of the options `n/r/l`)
- `pos:a` → `position: absolute;` (you can also use any of the options `pos:a/r/f`)
- `d:n/b/f/i/ib`, e.g. `d:ib` → `display: inline-block;`
- You can use `m` for margin and `p` for padding, followed by a direction: `mr` → `margin-right`, `pr` → `padding-right`
- `@f` will result in:
@font-face {
  font-family: ;
  src: url();
}
You can also use these shorthands
#### If you found this guide helpful, feel free to check out my GitHub/gists where I host similar content:
bgoonz — Overview
Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python | React | Node.js | Express | Sequelize…github.com
Or check out my personal resource site:
a/A-Student-Resources
Edit descriptiongoofy-euclid-1cd736.netlify.app
By Bryan Guner on March 6, 2021.
10 Essential React Interview Questions For Aspiring Frontend Developers
Comprehensive React Cheatsheet included at the bottom of this article!
Resources:
Introduction to React for Complete Beginners
All of the code examples below will be included a second time at the bottom of this article as an embedded gist.javascript.plainenglish.io
Beginner’s Guide To React Part 2
As I learn to build web applications in React I will blog about it in this series in an attempt to capture the…bryanguner.medium.com
bgoonz/React_Notes_V3
A JavaScript library for building user interfaces Declarative React makes it painless to create interactive UIs. Design…github.com
Getting Started - React
A JavaScript library for building user interfacesreactjs.org
Also … here is my brand new blog site… built with react and a static site generator called GatsbyJS!
It’s a work in progress
https://bgoonz-blog.netlify.app/
### Beginning of the Article:
Pros
- Easy to learn
- HTML-like syntax allows templating and highly detailed documentation
- Supports server-side rendering
- Easy migrating between different versions of React
- Uses JavaScript rather than framework-specific code
Cons
- Poor documentation
- Limited to only view part of MVC
- New developers might see JSX as a barrier
Where to Use React
- For apps that have multiple events
- When your app development team excels in CSS, JavaScript and HTML
- You want to create sharable components on your app
- When you need a personalized app solution
Misconceptions about React
React is a framework:
Many developers and aspiring students misinterpret React to be a fully functional framework. This is because we often compare React with major frameworks such as Angular and Ember. The comparison is not about which framework is best, but about the differences and similarities in React's and Angular's approaches, which makes their offerings worth studying. Angular works on the MVC model to support the Model, View, and Controller layers of an app. React focuses only on the 'V,' the view layer of an application, and on making it easier to handle and integrate smoothly into a project.
React’s Virtual DOM is faster than DOM.
React uses a Virtual DOM, which is essentially a tree of JavaScript objects representing the actual browser DOM. The advantage for developers is that, when they write React apps, they don't manipulate the DOM directly the way they do with jQuery. Instead, they make changes to the state object and allow React to make the necessary updates to the browser DOM. This creates a simpler development model, because developers don't need to track all DOM changes: they modify the state object, and React uses its algorithms to work out what part of the UI changed compared to the previous DOM. Using this information, it updates the actual browser DOM. The virtual DOM provides an excellent API for creating UI and minimizes the number of updates made to the browser DOM.
However, it is not faster than the actual DOM. As you just read, React needs to do extra work to figure out what part of the UI changed before actually performing those updates. Hence, the virtual DOM is beneficial for many things, but it isn't faster than the DOM.
1. Explain how React uses a tree data structure called the virtual DOM to model the DOM
> The virtual DOM is a copy of the actual DOM tree. Updates in React are made to the virtual DOM. React uses a diffing algorithm to reconcile the changes and send them to the DOM to commit and paint.
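As a toy illustration of the idea behind diffing (not React's actual implementation), comparing two virtual nodes of the same type can be as simple as collecting the props that changed:

```javascript
// Toy diff: if the node type changed, replace the node;
// otherwise collect only the props whose values differ.
function diff(oldNode, newNode) {
  if (oldNode.type !== newNode.type) return { replace: newNode };
  const patches = {};
  for (const key of Object.keys(newNode.props)) {
    if (oldNode.props[key] !== newNode.props[key]) patches[key] = newNode.props[key];
  }
  return { patch: patches };
}

const before = { type: 'h1', props: { className: 'old', id: 'title' } };
const after = { type: 'h1', props: { className: 'new', id: 'title' } };
console.log(diff(before, after)); // { patch: { className: 'new' } }
```

Only the changed `className` shows up in the patch, which is why React can commit a minimal set of updates to the real DOM.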
2. Create virtual DOM nodes using JSX To create a React virtual DOM node using JSX, define HTML syntax in a JavaScript file.
> Here, the JavaScript hello variable is set to a React virtual DOM h1 element with the text “Hello World!”.
> You can also nest virtual DOM nodes in each other just like how you do it in HTML with the real DOM.
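The embedded code example was lost in the export; here is a minimal sketch of the idea, using a stand-in `createElement` (not React's real implementation) to show that a virtual DOM node is just a JavaScript object, and that nodes nest like HTML tags:

```javascript
// Stand-in for React.createElement: a virtual DOM node is a plain object.
function createElement(type, props, ...children) {
  return { type, props: { ...props, children } };
}

// JSX `const hello = <h1>Hello World!</h1>;` compiles to a call like:
const hello = createElement('h1', null, 'Hello World!');

// Nesting works the same way as nesting tags in HTML:
const nested = createElement('div', null, hello);

console.log(hello.type); // "h1"
console.log(nested.props.children[0].props.children[0]); // "Hello World!"
```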
3. Use debugging tools to determine when a component is rendering
#### We use the React DevTools browser extension in our browser's DevTools to debug and view when a component is rendering
4. Describe how JSX transforms into actual DOM nodes
- To transform JSX into DOM nodes, we use the ReactDOM.render method. Babel transpiles the JSX into plain JavaScript calls, and React commits the resulting changes to the DOM.
5. Use the `ReactDOM.render` method to have React render your virtual DOM nodes under an actual DOM node
6. Attach an event listener to an actual DOM node using a virtual node
The virtual DOM (VDOM) is a programming concept where an ideal, or “virtual”, representation of a UI is kept in memory and synced with the “real” DOM by a library such as ReactDOM. This process is called reconciliation.
This approach enables the declarative API of React: You tell React what state you want the UI to be in, and it makes sure the DOM matches that state. This abstracts out the attribute manipulation, event handling, and manual DOM updating that you would otherwise have to use to build your app.
Since “virtual DOM” is more of a pattern than a specific technology, people sometimes say it to mean different things. In React world, the term “virtual DOM” is usually associated with React elements since they are the objects representing the user interface. React, however, also uses internal objects called “fibers” to hold additional information about the component tree. They may also be considered a part of “virtual DOM” implementation in React.
Is the Shadow DOM the same as the Virtual DOM?
No, they are different. The Shadow DOM is a browser technology designed primarily for scoping variables and CSS in web components. The virtual DOM is a concept implemented by libraries in JavaScript on top of browser APIs.
- To add an event listener to an element, define a method to handle the event and associate that method with the element event you want to listen for:
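The original embedded example was lost in the export; a hypothetical sketch of the idea follows. In React, a handler is attached via a prop on the virtual node (e.g. `<button onClick={handleClick}>Go</button>`); a stand-in node object (not React's real internals) shows that the handler is just a function stored on props, which React later wires to the real DOM event:

```javascript
// Hypothetical handler for the click event
function handleClick() {
  return 'clicked';
}

// Stand-in virtual node: the handler lives on props
const button = { type: 'button', props: { onClick: handleClick, children: ['Go'] } };

// Simulate the event firing:
console.log(button.props.onClick()); // "clicked"
```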
7. Use `create-react-app` to initialize a new React app and import required dependencies
- Create the default create-react-app application by typing in your terminal:
Explanation of npm vs npx from Free Code Camp:
npm (node package manager) is the dependency/package manager you get out of the box when you install Node.js. It provides a way for developers to install packages both globally and locally.
Sometimes you might want to take a look at a specific package and try out some commands. But you cannot do that without installing the dependencies in your local `node_modules` folder.
npm the package manager
npm is a couple of things. First and foremost, it is an online repository for the publishing of open-source Node.js projects.
Second, it is a CLI tool that aids you to install those packages and manage their versions and dependencies. There are hundreds of thousands of Node.js libraries and applications on npm and many more are added every day.
npm by itself doesn't run any packages. If you want to run a package using npm, you must specify that package in your `package.json` file.
When executables are installed via npm packages, npm creates links to them:
- local installs have links created at the `./node_modules/.bin/` directory
- global installs have links created from the global `bin/` directory (for example: `/usr/local/bin` on Linux or `%AppData%/npm` on Windows)
To execute a package with npm you either have to type the local path, like this:
$ ./node_modules/.bin/your-package
or you can run a locally installed package by adding it into your `package.json` file in the scripts section, like this:
{
  "name": "your-application",
  "version": "1.0.0",
  "scripts": {
    "your-package": "your-package"
  }
}
Then you can run the script using `npm run`:
npm run your-package
You can see that running a package with plain npm requires quite a bit of ceremony.
Fortunately, this is where npx comes in handy.
npx the package runner
Since npm version 5.2.0 npx is pre-bundled with npm. So it’s pretty much a standard nowadays.
npx is also a CLI tool whose purpose is to make it easy to install and manage dependencies hosted in the npm registry.
It’s now very easy to run any sort of Node.js-based executable that you would normally install via npm.
You can run the following command to see if it is already installed for your current npm version:
$ which npx
If it’s not, you can install it like this:
$ npm install -g npx
Once you make sure you have it installed, let’s see a few of the use cases that make npx extremely helpful.
Run a locally installed package easily
If you wish to execute a locally installed package, all you need to do is type:
$ npx your-package
npx will check whether `<command>` exists in `$PATH`, or in the local project binaries, and if so it will execute it.
Execute packages that are not previously installed
Another major advantage is the ability to execute a package that wasn’t previously installed.
Sometimes you just want to use some CLI tools but you don’t want to install them globally just to test them out. This means you can save some disk space and simply run them only when you need them. This also means your global variables will be less polluted.
> Now, where were we?
npx create-react-app <app-name> --use-npm
- npx gives us the latest version.
- `--use-npm` just means to use npm instead of yarn or some other package manager.
8. Pass props into a React component
`props` is an object that gets passed down from the parent component to the child component. The values can be of any data structure, including a function (which is an object). You can also interpolate values into JSX.
Set a variable to the string, “world”, and replace the string of “world” in the NavLinks JSX element with the variable wrapped in curly braces:
> Accessing props:
To access the props object in another component, we give that component a props parameter, and React will invoke the functional component with the props object.
Reminder:
Conceptually, components are like JavaScript functions. They accept arbitrary inputs (called “props”) and return React elements describing what should appear on the screen.
Function and Class Components
The simplest way to define a component is to write a JavaScript function:
function Welcome(props) {
return <h1>Hello, {props.name}</h1>;
}
This function is a valid React component because it accepts a single “props” (which stands for properties) object argument with data and returns a React element. We call such components “function components” because they are literally JavaScript functions.
You can also use an ES6 class to define a component:
class Welcome extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}</h1>;
  }
}
The above two components are equivalent from React’s point of view.
- You can pass down as many props keys as you want.
9. Destructure props
> You can destructure the props object in the function component’s parameter.
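For example (a hypothetical `Greeting` component, with the JSX return swapped for a plain string so the sketch stays self-contained):

```javascript
// Without destructuring: read values off the props object.
function Greeting(props) {
  return `Hello, ${props.name}!`;
}

// Destructuring the props object directly in the parameter list,
// with a default value for a missing prop:
function GreetingDestructured({ name, punctuation = '!' }) {
  return `Hello, ${name}${punctuation}`;
}

console.log(Greeting({ name: 'World' }));             // "Hello, World!"
console.log(GreetingDestructured({ name: 'World' })); // "Hello, World!"
```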
10. Create routes using components from the react-router-dom package
a. Install the react-router-dom package:
npm i react-router-dom
> In your index.js:
- Above you import your BrowserRouter with which you can wrap your entire route hierarchy. This makes routing information from React Router available to all its descendent components.
- Then in the component of your choosing, usually top tier such as App.js, you can create your routes using the Route and Switch Components
REACT CHEAT SHEET:
More content at plainenglish.io
By Bryan Guner on June 11, 2021.
Everything You Need To Become A Machine Learner
Part 1:
This list of resources is specifically targeted at Web Developers and Data Scientists…. so do with it what you will…
> This list borrows heavily from multiple lists created by sindresorhus.
Machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.
> The goal of AI is to create computer models that exhibit “intelligent behaviors” like humans, according to Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. This means machines that can recognize a visual scene, understand a text written in natural language, or perform an action in the physical world.
> Machine learning is one way to use AI. It was defined in the 1950s by AI pioneer Arthur Samuel as “the field of study that gives computers the ability to learn without explicitly being programmed.”
- [📖] Delivering Happiness
- [📖] Good to Great: Why Some Companies Make the Leap…And Others Don’t
- [📖] Hello, Startup: A Programmer’s Guide to Building Products, Technologies, and Teams
- [📖] How Google Works
- [📖] Learn to Earn: A Beginner’s Guide to the Basics of Investing and Business
- [📖] Rework
- [📖] The Airbnb Story
- [📖] The Personal MBA
- [ ] Facebook: Digital marketing: get started
- [ ] Facebook: Digital marketing: go further
- [ ] Google Analytics for Beginners
- [ ] Moz: The Beginner’s Guide to SEO
- [ ] Smartly: Marketing Fundamentals
- [ ] Treehouse: SEO Basics
- [🅤] Udacity: App Monetization
- [🅤] Udacity: App Marketing
- [🅤] Udacity: How to Build a Startup
Natural language processing
Natural language processing is a field of machine learning in which machines learn to understand natural language as spoken and written by humans, instead of the data and numbers normally used to program computers. This allows machines to recognize language, understand it, and respond to it, as well as create new text and translate between languages. Natural language processing enables familiar technology like chatbots and digital assistants like Siri or Alexa.
Neural networks
Neural networks are a commonly used, specific class of machine learning algorithms. Artificial neural networks are modeled on the human brain, in which thousands or millions of processing nodes are interconnected and organized into layers.
In an artificial neural network, cells, or nodes, are connected, with each cell processing inputs and producing an output that is sent to other neurons. Labeled data moves through the nodes, or cells, with each cell performing a different function. In a neural network trained to identify whether a picture contains a cat or not, the different nodes would assess the information and arrive at an output that indicates whether a picture features a cat.
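The "cell processing inputs and producing an output" described above can be sketched as a single artificial neuron: a weighted sum of the inputs plus a bias, passed through an activation function (the weights and inputs below are illustrative only):

```javascript
// One neuron: output = sigmoid(w · x + b)
function neuron(inputs, weights, bias) {
  const z = inputs.reduce((sum, x, i) => sum + x * weights[i], bias);
  return 1 / (1 + Math.exp(-z)); // sigmoid squashes the sum into (0, 1)
}

// Two inputs with illustrative weights: z = 1*0.5 + 0*(-0.5) + 0 = 0.5
console.log(neuron([1, 0], [0.5, -0.5], 0)); // ≈ 0.62
```

In a real network, thousands of these units are stacked in layers, and training adjusts the weights and biases so the final layer's output matches the labels.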
Be familiar with how Machine Learning is applied at other companies
Trillions of Questions, No Easy Answers: A (home) movie about how Google Search works — YouTube
- [📰] How Facebook uses super-efficient AI models to detect hate speech
- [📰] Recent Advances in Google Translate
- [📰] Cannes: How Machine Learning saves us $1.7M a year on document previews
- [📰] Machine Learning @ Monzo in 2020
- [📰] How image search works at Dropbox
- [ ] Real-world AI Case Studies
- [ ] Andrej Karpathy on AI at Tesla (Full Stack Deep Learning — August 2018)
- [ ] Jai Ranganathan at Data Science at Uber (Full Stack Deep Learning — August 2018)
- [ ] John Apostolopoulos of Cisco discusses “Machine Learning in Networking”
0:48:44
- [ ] Joaquin Candela, Director of Applied Machine Learning, Facebook in conversation with Esteban Arcaute
0:52:27
- [ ] Eric Colson, Chief Algorithms Officer, Stitch Fix
0:53:57
- [ ] Claudia Perlich, Advisor to Dstillery and Adjunct Professor NYU Stern School of Business
0:51:59
- [ ] Jeff Dean, Google Senior Fellow and SVP Google AI — Deep Learning to Solve Challenging Problems
0:58:45
- [ ] James Parr, Director of Frontier Development Lab (NASA), FDL Europe & CEO, Trillium Technologies
0:55:46
- [ ] Daphne Koller, Founder & CEO of Insitro — In Conversation with Carlos Bustamante
0:49:29
- [ ] Eric Horvitz, Microsoft Research — AI in the Open World: Advances, Aspirations, and Rough Edges
0:56:11
- [ ] Tony Jebara, Netflix — Machine Learning for Recommendation and Personalization
0:55:20
- [💻] Analyzing Police Activity with pandas
- [💻] HR Analytics in Python: Predicting Employee Churn
- [💻] Predicting Customer Churn in Python
- [📺 ] How does YouTube recommend videos? — AI EXPLAINED!
0:33:53
- [📺 ] How does Google Translate’s AI work?
0:15:02
- [📺 ] Data Science in Finance
0:17:52
- [📺 ] The Age of AI
- [ ] How Far is Too Far? | The Age of A.I.
0:34:39
- [ ] Healed through A.I. | The Age of A.I.
0:39:55
- [ ] Using A.I. to build a better human | The Age of A.I.
0:44:27
- [ ] Love, art and stories: decoded | The Age of A.I.
0:38:57
- [ ] The ‘Space Architects’ of Mars | The Age of A.I.
0:30:10
- [ ] Will a robot take my job? | The Age of A.I.
0:36:14
- [ ] Saving the world one algorithm at a time | The Age of A.I.
0:46:37
- [ ] How A.I. is searching for Aliens | The Age of A.I.
0:36:12
- [📺 ] Using Intent Data to Optimize the Self-Solve Experience
- [📺 ] Trillions of Questions, No Easy Answers: A (home) movie about how Google Search works
- [📺 ] Google Machine Learning System Design Mock Interview
- [📺 ] Netflix Machine Learning Mock Interview: Type-ahead Search
- [📺 ] Machine Learning design: Search engine for Q&A
- [📺 ] Engineering Systems for Real-Time Predictions @DoorDash
- [📺 ] How Gmail Uses Iterative Design, Machine Learning and AI to Create More Assistive Features
- [📺 ] Wayfair Data Science Explains It All: Human-in-the-loop Systems
- [📺 ] Leaving the lab: Building NLP applications that real people can use
- [📺 ] Machine Learning at Uber (Natural Language Processing Use Cases)
- [📺 ] Google Wave: Natural Language Processing
- [📺 ] Natural Language Understanding in Alexa
- [📺 ] The Machine Learning Behind Alexa’s AI Systems
- [📺 ] Ines Montani Keynote — Applied NLP Thinking
- [📺 ] Lecture 9: Lukas Biewald
- [📺 ] Lecture 13: Research Directions
- [📺 ] Lecture 14: Jeremy Howard
- [📺 ] Lecture 15: Richard Socher
- [📺 ] Machine learning across industries with Vicki Boykis
0:34:02
- [📺 ] Rachael Tatman — Conversational A.I. and Linguistics
0:36:51
- [📺 ] Nicolas Koumchatzky — Machine Learning in Production for Self Driving Cars
0:44:56
- [📺 ] Brandon Rohrer — Machine Learning in Production for Robots
0:34:31
- [📺 ] [CVPR’21 WAD] Keynote — Andrej Karpathy, Tesla
Be able to frame a Machine Learning problem
· [ ] AWS: Types of Machine Learning Solutions
· [📰] Apply Machine Learning to your Business
· [📰] Resilience and Vibrancy: The 2020 Data & AI Landscape
· [📰] Software 2.0
· [📰] Highlights from ICML 2020
· [📰] A Peek at Trends in Machine Learning
· [📰] How to deliver on Machine Learning projects
· [📰] Data Science as a Product
· [📰] Customer service is full of machine learning problems
· [📰] Choosing Problems in Data Science and Machine Learning
· [📰] Why finance is deploying natural language processing
· [📰] The Last 5 Years In Deep Learning
· [📰] Always start with a stupid model, no exceptions.
· [📰] Most impactful AI trends of 2018: the rise of Machine Learning Engineering
· [📰] Building machine learning products: a problem well-defined is a problem half-solved.
· [📰] Simple considerations for simple people building fancy neural networks
· [📰] Maximizing Business Impact with Machine Learning
· [📖] AI Superpowers: China, Silicon Valley, and the New World Order
· [📖] A Human’s Guide to Machine Intelligence
· [📖] The Future Computed
· [📖] Machine Learning Yearning by Andrew Ng
· [📖] Prediction Machines: The Simple Economics of Artificial Intelligence
· [📖] Building Machine Learning Powered Applications: Going from Idea to Product
· [ ] Coursera: AI For Everyone
· [💻] Data Science for Everyone
· [💻] Machine Learning with the Experts: School Budgets
· [💻] Machine Learning for Everyone
· [💻] Data Science for Managers
· [ ] Facebook: Field Guide to Machine Learning
· [G] Introduction to Machine Learning Problem Framing
· [ ] Pluralsight: How to Think About Machine Learning Algorithms
· [ ] State of AI Report 2020
· [📺 ] Vincent Warmerdam: The profession of solving (the wrong problem) | PyData Amsterdam 2019
· [📺 ] Hugging Face, Transformers | NLP Research and Open Source | Interview with Julien Chaumond
· [📺 ] Vincent Warmerdam — Playing by the Rules-Based-Systems | PyData Eindhoven 2020
· [📺 ] Building intuitions before building models
Be familiar with data ethics
- [📰] How to Detect Bias in AI
- [ ] Netflix: Coded Bias
- [ ] Netflix: The Great Hack
- [ ] Netflix: The Social Dilemma
- [ ] Practical Data Ethics
- [ ] Lesson 1: Disinformation
- [ ] Lesson 2: Bias & Fairness
- [ ] Lesson 3: Ethical Foundations & Practical Tools
- [ ] Lesson 4: Privacy and surveillance
- [ ] Lesson 4 continued: Privacy and surveillance
- [ ] Lesson 5.1: The problem with metrics
- [ ] Lesson 5.2: Our Ecosystem, Venture Capital, & Hypergrowth
- [ ] Lesson 5.3: Losing the Forest for the Trees, guest lecture by Ali Alkhatib
- [ ] Lesson 6: Algorithmic Colonialism, and Next Steps
- [📺 ] Lecture 9: Ethics (Full Stack Deep Learning — Spring 2021) 1:04:50
- [📺 ] SE4AI: Ethics and Fairness 1:18:37
- [📺 ] SE4AI: Security 1:18:24
- [📺 ] SE4AI: Safety 1:17:37
Be able to import data from multiple sources
- [ ] Docs: Beautiful Soup Documentation
- [💻] Importing Data in Python (Part 2)
- [💻] Web Scraping in Python
Be able to setup data annotation efficiently
- [📺 ] https://www.youtube.com/watch?v=iPmQ5ezQNPY
- [📰] Create A Synthetic Image Dataset — The “What”, The “Why” and The “How”
- [📰] We need Synthetic Data
- [📰] Weak Supervision for Online Discussions
- [📰] Machine Learning Infrastructure Tools for Data Preparation
- [📰] Exploring the Role of Human Raters in Creating NLP Datasets
- [📰] Inter-Annotator Agreement (IAA)
- [📰] How to compute inter-rater reliability metrics (Cohen’s Kappa, Fleiss’s Kappa, Cronbach Alpha, Krippendorff Alpha, Scott’s Pi, Inter-class correlation) in Python
- [📺 ] Snorkel: Dark Data and Machine Learning — Christopher Ré
- [📺 ] Training a NER Model with Prodigy and Transfer Learning
- [📺 ] Training a New Entity Type with Prodigy — annotation powered by active learning
- [📺 ] ECCV 2020 WSL tutorial: 4. Human-in-the-loop annotations
- [📺 ] Active Learning: Why Smart Labeling is the Future of Data Annotation | Alectio
- [📺 ] Lecture 8: Data Management (Full Stack Deep Learning — Spring 2021) 0:59:42
- [📺 ] Lab 6: Data Labeling (Full Stack Deep Learning — Spring 2021) 0:05:06
- [📺 ] Lecture 6: Data Management
- [📺 ] SE4AI: Data Quality 1:07:15
- [📺 ] SE4AI: Data Programming and Intro to Big Data Processing 0:33:04
- [📺 ] SE4AI: Managing and Processing Large Datasets 1:21:27
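One of the inter-rater reliability metrics listed above, Cohen’s kappa, is simple enough to sketch in plain Python. This is a minimal two-rater version written for illustration (real projects would reach for a library implementation):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two annotators."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters labeled at random
    # according to their own label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[label] / n * freq_b[label] / n for label in freq_a)
    return (observed - expected) / (1 - expected)

print(cohens_kappa([0, 0, 1, 1, 0], [0, 1, 1, 1, 0]))  # ≈ 0.615
```

A kappa near 0 means the raters agree no better than chance; values above ~0.6 are usually read as substantial agreement.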
Be able to manipulate data with Numpy
- [📰] A Visual Intro to NumPy and Data Representation
- [📰] Good practices with numpy random number generators
- [📰] NumPy Illustrated: The Visual Guide to NumPy
- [📰] NumPy Fundamentals for Data Science and Machine Learning
- [💻] Intro to Python for Data Science
- [ ] Pluralsight: Working with Multidimensional Data Using NumPy
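As a taste of the vectorized style these NumPy resources teach, here is column-wise standardization done with broadcasting instead of explicit loops (a minimal sketch, assuming NumPy is installed):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# X.mean(axis=0) has shape (2,); broadcasting stretches it
# across all three rows, so no Python loop is needed.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

print(Z.mean(axis=0))  # ≈ [0. 0.]
print(Z.std(axis=0))   # ≈ [1. 1.]
```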
Be able to manipulate data with Pandas
- [📰] Visualizing Pandas’ Pivoting and Reshaping Functions
- [📰] A Gentle Visual Intro to Data Analysis in Python Using Pandas
- [📰] Comprehensive Guide to Grouping and Aggregating with Pandas
- [📰] 8 Python Pandas Value_counts() tricks that make your work more efficient
- [💻] pandas Foundations
- [💻] Pandas Joins for Spreadsheet Users
- [💻] Manipulating DataFrames with pandas
- [💻] Merging DataFrames with pandas
- [💻] Data Manipulation with pandas
- [💻] Optimizing Python Code with pandas
- [💻] Streamlined Data Ingestion with pandas
- [💻] Analyzing Marketing Campaigns with pandas
- [ ] edX: Implementing Predictive Analytics with Spark in Azure HDInsight
- [📰] Modern Pandas
- [ ] Modern Pandas (Part 1)
- [ ] Modern Pandas (Part 2)
- [ ] Modern Pandas (Part 3)
- [ ] Modern Pandas (Part 4)
- [ ] Modern Pandas (Part 5)
- [ ] Modern Pandas (Part 6)
- [ ] Modern Pandas (Part 7)
- [ ] Modern Pandas (Part 8)
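The split-apply-combine pattern that runs through most of these pandas resources fits in a few lines. A minimal sketch, assuming pandas is installed (the column names are made up for the example):

```python
import pandas as pd

df = pd.DataFrame({
    "team":  ["a", "a", "b", "b", "b"],
    "score": [1, 3, 5, 7, 9],
})

# Split rows by team, apply aggregations, combine into one table.
summary = df.groupby("team")["score"].agg(["mean", "count"])
print(summary)
```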
Be able to manipulate data in spreadsheets
- [💻] Spreadsheet basics
- [💻] Data Analysis with Spreadsheets
- [💻] Intermediate Spreadsheets for Data Science
- [💻] Pivot Tables with Spreadsheets
- [💻] Data Visualization in Spreadsheets
- [💻] Introduction to Statistics in Spreadsheets
- [💻] Conditional Formatting in Spreadsheets
- [💻] Marketing Analytics in Spreadsheets
- [💻] Error and Uncertainty in Spreadsheets
- [ ] edX: Analyzing and Visualizing Data with Excel
Be able to manipulate data in databases
- [ ] Codecademy: SQL Track
- [💻] Intro to SQL for Data Science
- [💻] Introduction to MongoDB in Python
- [💻] Intermediate SQL
- [💻] Exploratory Data Analysis in SQL
- [💻] Joining Data in PostgreSQL
- [💻] Querying with TransactSQL
- [💻] Introduction to Databases in Python
- [💻] Reporting in SQL
- [💻] Applying SQL to Real-World Problems
- [💻] Analyzing Business Data in SQL
- [💻] Data-Driven Decision Making in SQL
- [💻] Database Design
- [🅤]ꭏ SQL for Data Analysis
- [🅤]ꭏ Intro to relational database
- [🅤]ꭏ Database Systems Concepts & Design
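Everything in the SQL courses above carries over to the `sqlite3` module in Python’s standard library, which is handy for practicing joins without a database server (the table and column names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders (user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob")])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# Aggregate each user's order total with a JOIN + GROUP BY.
rows = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()
print(rows)  # [('ada', 15.0), ('bob', 7.5)]
```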
Be able to use Linux
Resources:
- [📰] Bash Proficiency In Under 15 Minutes (bryanguner.medium.com)
- [📰] These Are The Bash Shell Commands That Stand Between Me And Insanity (levelup.gitconnected.com)
- [📰] Bash Commands That Save Me Time and Frustration (medium.com)
- [📰] Life Saving Bash Scripts Part 2 (medium.com)
- [📰] What Are Bash Aliases And Why Should You Be Using Them! (bryanguner.medium.com)
- [📰] BASH CHEAT SHEET (bryanguner.medium.com)
> holy grail of learning bash
- [📰] Streamline your projects using Makefile
- [📰] Understand Linux Load Averages and Monitor Performance of Linux
- [📰] Command-line Tools can be 235x Faster than your Hadoop Cluster
- [ ] Calmcode: makefiles
- [ ] Calmcode: entr
- [ ] Codecademy: Learn the Command Line
- [💻] Introduction to Shell for Data Science
- [💻] Introduction to Bash Scripting
- [💻] Data Processing in Shell
- [ ] MIT: The Missing Semester of CS Education
- [ ] Lecture 1: Course Overview + The Shell (2020) 0:48:16
- [ ] Lecture 2: Shell Tools and Scripting (2020) 0:48:55
- [ ] Lecture 3: Editors (vim) (2020) 0:48:26
- [ ] Lecture 4: Data Wrangling (2020) 0:50:03
- [ ] Lecture 5: Command-line Environment (2020) 0:56:06
- [ ] Lecture 6: Version Control (git) (2020) 1:24:59
- [ ] Lecture 7: Debugging and Profiling (2020) 0:54:13
- [ ] Lecture 8: Metaprogramming (2020) 0:49:52
- [ ] Lecture 9: Security and Cryptography (2020) 1:00:59
- [ ] Lecture 10: Potpourri (2020) 0:57:54
- [ ] Lecture 11: Q&A (2020) 0:53:52
- [ ] Thoughtbot: Mastering the Shell
- [ ] Thoughtbot: tmux
- [🅤]ꭏ Linux Command Line Basics
- [🅤]ꭏ Shell Workshop
- [🅤]ꭏ Configuring Linux Web Servers
- [ ] Wes Bos: Command Line Power User
- [📺 ] GNU Parallel
Be able to perform feature selection and engineering
- [📰] Tips for Advanced Feature Engineering
- [📰] Preparing data for a machine learning model
- [📰] Feature selection for a machine learning model
- [📰] Learning from imbalanced data
- [📰] Hacker’s Guide to Data Preparation for Machine Learning
- [📰] Practical Guide to Handling Imbalanced Datasets
- [💻] Analyzing Social Media Data in Python
- [💻] Dimensionality Reduction in Python
- [💻] Preprocessing for Machine Learning in Python
- [💻] Data Types for Data Science
- [💻] Cleaning Data in Python
- [💻] Feature Engineering for Machine Learning in Python
- [💻] Importing & Managing Financial Data in Python
- [💻] Manipulating Time Series Data in Python
- [💻] Working with Geospatial Data in Python
- [💻] Analyzing IoT Data in Python
- [💻] Dealing with Missing Data in Python
- [💻] Exploratory Data Analysis in Python
- [ ] edX: Data Science Essentials
- [🅤]ꭏ Creating an Analytical Dataset
- [📺 ] Applied Machine Learning 2020–04 — Preprocessing 1:07:40
- [📺 ] Applied Machine Learning 2020–11 — Model Inspection and Feature Selection 1:15:15
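A recurring theme in the feature-engineering material above is turning raw columns into model-ready numbers; one-hot encoding a categorical column is the canonical example. A dependency-free sketch (real projects would use pandas’ `get_dummies` or scikit-learn’s `OneHotEncoder`):

```python
def one_hot(values):
    """Map each categorical value to a binary indicator vector."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    encoded = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1
        encoded.append(row)
    return categories, encoded

cats, rows = one_hot(["red", "green", "red", "blue"])
print(cats)  # ['blue', 'green', 'red']
print(rows)  # [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```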
Be able to experiment in a notebook
- [📰] Securely storing configuration credentials in a Jupyter Notebook
- [📰] Automatically Reload Modules with %autoreload
- [ ] Calmcode: ipywidgets
- [ ] Documentation: Jupyter Lab
- [ ] Pluralsight: Getting Started with Jupyter Notebook and Python
- [📺 ] William Horton — A Brief History of Jupyter Notebooks
- [📺 ] I Like Notebooks
- [📺 ] I don’t like notebooks.- Joel Grus (Allen Institute for Artificial Intelligence)
- [📺 ] Ryan Herr — After model.fit, before you deploy| JupyterCon 2020
- [📺 ] nbdev live coding with Hamel Husain
- [📺 ] How to Use JupyterLab
Be able to visualize data
- [📰] Creating a Catchier Word Cloud Presentation
- [📰] Effectively Using Matplotlib
- [💻] Introduction to Data Visualization with Python
- [💻] Introduction to Seaborn
- [💻] Introduction to Matplotlib
- [💻] Intermediate Data Visualization with Seaborn
- [💻] Visualizing Time Series Data in Python
- [💻] Improving Your Data Visualizations in Python
- [💻] Visualizing Geospatial Data in Python
- [💻] Interactive Data Visualization with Bokeh
- [📺 ] Applied Machine Learning 2020–02 Visualization and matplotlib 1:07:30
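Most of the Matplotlib material above builds on the same object-oriented pattern: create a figure and axes, then draw on the axes. A minimal sketch, assuming Matplotlib is installed (the Agg backend makes it runnable headlessly; drop that line for interactive use):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot([0, 1, 2, 3], [0, 1, 4, 9], marker="o", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("squares.png")
```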
Be able to model problems mathematically
- [ ] 3Blue1Brown: Essence of Calculus
- [ ] The Essence of Calculus, Chapter 1 0:17:04
- [ ] The paradox of the derivative | Essence of calculus, chapter 2 0:17:57
- [ ] Derivative formulas through geometry | Essence of calculus, chapter 3 0:18:43
- [ ] Visualizing the chain rule and product rule | Essence of calculus, chapter 4 0:16:52
- [ ] What’s so special about Euler’s number e? | Essence of calculus, chapter 5 0:13:50
- [ ] Implicit differentiation, what’s going on here? | Essence of calculus, chapter 6 0:15:33
- [ ] Limits, L’Hôpital’s rule, and epsilon delta definitions | Essence of calculus, chapter 7 0:18:26
- [ ] Integration and the fundamental theorem of calculus | Essence of calculus, chapter 8 0:20:46
- [ ] What does area have to do with slope? | Essence of calculus, chapter 9 0:12:39
- [ ] Higher order derivatives | Essence of calculus, chapter 10 0:05:38
- [ ] Taylor series | Essence of calculus, chapter 11 0:22:19
- [ ] What they won’t teach you in calculus 0:16:22
- [ ] 3Blue1Brown: Essence of linear algebra
- [ ] Vectors, what even are they? | Essence of linear algebra, chapter 1 0:09:52
- [ ] Linear combinations, span, and basis vectors | Essence of linear algebra, chapter 2 0:09:59
- [ ] Linear transformations and matrices | Essence of linear algebra, chapter 3 0:10:58
- [ ] Matrix multiplication as composition | Essence of linear algebra, chapter 4 0:10:03
- [ ] Three-dimensional linear transformations | Essence of linear algebra, chapter 5 0:04:46
- [ ] The determinant | Essence of linear algebra, chapter 6 0:10:03
- [ ] Inverse matrices, column space and null space | Essence of linear algebra, chapter 7 0:12:08
- [ ] Nonsquare matrices as transformations between dimensions | Essence of linear algebra, chapter 8 0:04:27
- [ ] Dot products and duality | Essence of linear algebra, chapter 9 0:14:11
- [ ] Cross products | Essence of linear algebra, Chapter 10 0:08:53
- [ ] Cross products in the light of linear transformations | Essence of linear algebra chapter 11 0:13:10
- [ ] Cramer’s rule, explained geometrically | Essence of linear algebra, chapter 12 0:12:12
- [ ] Change of basis | Essence of linear algebra, chapter 13 0:12:50
- [ ] Eigenvectors and eigenvalues | Essence of linear algebra, chapter 14 0:17:15
- [ ] Abstract vector spaces | Essence of linear algebra, chapter 15 0:16:46
- [ ] 3Blue1Brown: Neural networks
- [ ] But what is a Neural Network? | Deep learning, chapter 1 0:19:13
- [ ] Gradient descent, how neural networks learn | Deep learning, chapter 2 0:21:01
- [ ] What is backpropagation really doing? | Deep learning, chapter 3 0:13:54
- [ ] Backpropagation calculus | Deep learning, chapter 4 0:10:17
- [📰] A Visual Tour of Backpropagation
- [📰] Entropy, Cross Entropy, and KL Divergence
- [📰] Interview Guide to Probability Distributions
- [📰] Introduction to Linear Algebra for Applied Machine Learning with Python
- [📰] Entropy of a probability distribution — in layman’s terms
- [📰] KL Divergence — in layman’s terms
- [📰] Probability Distributions
- [📰] Relearning Matrices as Linear Functions
- [📰] You Could Have Come Up With Eigenvectors — Here’s How
- [📰] PageRank — How Eigenvectors Power the Algorithm Behind Google Search
- [📰] Interactive Visualization of Why Eigenvectors Matter
- [📰] Cross-Entropy and KL Divergence
- [📰] Why Randomness Is Information?
- [📰] Basic Probability Theory
- [📰] Math You Need to Succeed In Machine Learning Interviews
- [📖] Basics of Linear Algebra for Machine Learning
- [💻] Introduction to Statistics in Python
- [💻] Foundations of Probability in Python
- [💻] Statistical Thinking in Python (Part 1)
- [💻] Statistical Thinking in Python (Part 2)
- [💻] Statistical Simulation in Python
- [ ] edX: Essential Statistics for Data Analysis using Excel
- [ ] Computational Linear Algebra for Coders
- [ ] Khan Academy: Precalculus
- [ ] Khan Academy: Probability
- [ ] Khan Academy: Differential Calculus
- [ ] Khan Academy: Multivariable Calculus
- [ ] Khan Academy: Linear Algebra
- [ ] MIT: 18.06 Linear Algebra (Professor Strang)
- [ ] 1. The Geometry of Linear Equations 0:39:49
- [ ] 2. Elimination with Matrices. 0:47:41
- [ ] 3. Multiplication and Inverse Matrices 0:46:48
- [ ] 4. Factorization into A = LU 0:48:05
- [ ] 5. Transposes, Permutations, Spaces R^n 0:47:41
- [ ] 6. Column Space and Nullspace 0:46:01
- [ ] 9. Independence, Basis, and Dimension 0:50:14
- [ ] 10. The Four Fundamental Subspaces 0:49:20
- [ ] 11. Matrix Spaces; Rank 1; Small World Graphs 0:45:55
- [ ] 14. Orthogonal Vectors and Subspaces 0:49:47
- [ ] 15. Projections onto Subspaces 0:48:51
- [ ] 16. Projection Matrices and Least Squares 0:48:05
- [ ] 17. Orthogonal Matrices and Gram-Schmidt 0:49:09
- [ ] 21. Eigenvalues and Eigenvectors 0:51:22
- [ ] 22. Diagonalization and Powers of A 0:51:50
- [ ] 24. Markov Matrices; Fourier Series 0:51:11
- [ ] 25. Symmetric Matrices and Positive Definiteness 0:43:52
- [ ] 27. Positive Definite Matrices and Minima 0:50:40
- [ ] 29. Singular Value Decomposition 0:40:28
- [ ] 30. Linear Transformations and Their Matrices 0:49:27
- [ ] 31. Change of Basis; Image Compression 0:50:13
- [ ] 33. Left and Right Inverses; Pseudoinverse 0:41:52
- [ ] StatQuest: Statistics Fundamentals
- [ ] StatQuest: Histograms, Clearly Explained 0:03:42
- [ ] StatQuest: What is a statistical distribution? 0:05:14
- [ ] StatQuest: The Normal Distribution, Clearly Explained!!! 0:05:12
- [ ] Statistics Fundamentals: Population Parameters 0:14:31
- [ ] Statistics Fundamentals: The Mean, Variance and Standard Deviation 0:14:22
- [ ] StatQuest: What is a statistical model? 0:03:45
- [ ] StatQuest: Sampling A Distribution 0:03:48
- [ ] Hypothesis Testing and The Null Hypothesis 0:14:40
- [ ] Alternative Hypotheses: Main Ideas!!! 0:09:49
- [ ] p-values: What they are and how to interpret them 0:11:22
- [ ] How to calculate p-values 0:25:15
- [ ] p-hacking: What it is and how to avoid it! 0:13:44
- [ ] Statistical Power, Clearly Explained!!! 0:08:19
- [ ] Power Analysis, Clearly Explained!!! 0:16:44
- [ ] Covariance and Correlation Part 1: Covariance 0:22:23
- [ ] Covariance and Correlation Part 2: Pearson’s Correlation 0:19:13
- [ ] StatQuest: R-squared explained 0:11:01
- [ ] The Central Limit Theorem 0:07:35
- [ ] StatQuickie: Standard Deviation vs Standard Error 0:02:52
- [ ] StatQuest: The standard error 0:11:43
- [ ] StatQuest: Technical and Biological Replicates 0:05:27
- [ ] StatQuest — Sample Size and Effective Sample Size, Clearly Explained 0:06:32
- [ ] Bar Charts Are Better than Pie Charts 0:01:45
- [ ] StatQuest: Boxplots, Clearly Explained 0:02:33
- [ ] StatQuest: Logs (logarithms), clearly explained 0:15:37
- [ ] StatQuest: Confidence Intervals 0:06:41
- [ ] StatQuickie: Thresholds for Significance 0:06:40
- [ ] StatQuickie: Which t test to use 0:05:10
- [ ] StatQuest: One or Two Tailed P-Values 0:07:05
- [ ] The Binomial Distribution and Test, Clearly Explained!!! 0:15:46
- [ ] StatQuest: Quantiles and Percentiles, Clearly Explained!!! 0:06:30
- [ ] StatQuest: Quantile-Quantile Plots (QQ plots), Clearly Explained 0:06:55
- [ ] StatQuest: Quantile Normalization 0:04:51
- [ ] StatQuest: Probability vs Likelihood 0:05:01
- [ ] StatQuest: Maximum Likelihood, clearly explained!!! 0:06:12
- [ ] Maximum Likelihood for the Exponential Distribution, Clearly Explained! V2.0 0:09:39
- [ ] Why Dividing By N Underestimates the Variance 0:17:14
- [ ] Maximum Likelihood for the Binomial Distribution, Clearly Explained!!! 0:11:24
- [ ] Maximum Likelihood For the Normal Distribution, step-by-step! 0:19:50
- [ ] StatQuest: Odds and Log(Odds), Clearly Explained!!! 0:11:30
- [ ] StatQuest: Odds Ratios and Log(Odds Ratios), Clearly Explained!!! 0:16:20
- [ ] Live 2020–04–20!!! Expected Values 0:33:00
- [🅤]ꭏ Eigenvectors and Eigenvalues
- [🅤]ꭏ Linear Algebra Refresher
- [🅤]ꭏ Statistics
- [🅤]ꭏ Intro to Descriptive Statistics
- [🅤]ꭏ Intro to Inferential Statistics
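The eigenvector material above has a nice computational counterpart: power iteration, which finds the dominant eigenvector by repeatedly applying the matrix and renormalizing (this is also the idea behind PageRank, linked earlier). A minimal NumPy sketch:

```python
import numpy as np

def power_iteration(A, iters=500):
    """Approximate the dominant eigenvalue/eigenvector of A."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v = v / np.linalg.norm(v)  # renormalize to avoid overflow
    # The Rayleigh quotient gives the matching eigenvalue estimate.
    lam = v @ A @ v
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)  # ≈ 3.618, i.e. (5 + sqrt(5)) / 2, the larger eigenvalue
```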
Be able to setup project structure
- [📰] pydantic
- [📰] Organizing machine learning projects: project management guidelines
- [📰] Logging and Debugging in Machine Learning — How to use Python debugger and the logging module to find errors in your AI application
- [📰] Best practices to write Deep Learning code: Project structure, OOP, Type checking and documentation
- [📰] Configuring Google Colab Like A Pro
- [📰] Stop using print, start using loguru in Python
- [📰] Hypermodern Python
- [📰] Hypermodern Python Chapter 2: Testing
- [📰] Hypermodern Python Chapter 3: Linting
- [📰] Hypermodern Python Chapter 4: Typing
- [📰] Hypermodern Python Chapter 5: Documentation
- [📰] Hypermodern Python Chapter 6: CI/CD
- [📰] Push and pull: when and why to update your dependencies
- [📰] Reproducible and upgradable Conda environments: dependency management with conda-lock
- [📰] Options for packaging your Python code: Wheels, Conda, Docker, and more
- [📰] Making model training scripts robust to spot interruptions
- [ ] Calmcode: logging
- [ ] Calmcode: tqdm
- [ ] Calmcode: virtualenv
- [ ] Coursera: Structuring Machine Learning Projects
- [ ] Doc: Python Lifecycle Training
- [💻] Introduction to Data Engineering
- [💻] Conda Essentials
- [💻] Conda for Building & Distributing Packages
- [💻] Software Engineering for Data Scientists in Python
- [💻] Designing Machine Learning Workflows in Python
- [💻] Object-Oriented Programming in Python
- [💻] Command Line Automation in Python
- [💻] Creating Robust Python Workflows
- [ ] Developing Python Packages
- [ ] Treehouse: Object Oriented Python
- [ ] Treehouse: Setup Local Python Environment
- [🅤]ꭏ Writing READMEs
- [📺 ] Lecture 1: Introduction to Deep Learning
- [📺 ] Lecture 2: Setting Up Machine Learning Projects
- [📺 ] Lecture 3: Introduction to the Text Recognizer Project
- [📺 ] Lecture 4: Infrastructure and Tooling
- [📺 ] Hydra configuration
- [📺 ] Continuous integration
- [📺 ] Data Engineering + Machine Learning + Software Engineering // Satish Chandra Gupta // MLOps Coffee Sessions #16
- [📺 ] OO Design and Testing Patterns for Machine Learning with Chris Gerpheide
- [📺 ] Tutorial: Sebastian Witowski — Modern Python Developer’s Toolkit
- [📺 ] Lecture 13: Machine Learning Teams (Full Stack Deep Learning — Spring 2021) 0:58:13
- [📺 ] Lecture 5: Machine Learning Projects (Full Stack Deep Learning — Spring 2021) 1:13:14
- [📺 ] Lecture 6: Infrastructure & Tooling (Full Stack Deep Learning — Spring 2021) 1:07:21
Be able to version control code
- [📰] Mastering Git Stash Workflow
- [📰] How to Become a Master of Git Tags
- [📰] How to track large files in Github / Bitbucket? Git LFS to the rescue
- [📰] Keep your git directory clean with git clean and git trash
- [ ] Codecademy: Learn Git
- [ ] Code School: Git Real
- [💻] Introduction to Git for Data Science
- [ ] Thoughtbot: Mastering Git
- [🅤]ꭏ GitHub & Collaboration
- [🅤]ꭏ How to Use Git and GitHub
- [🅤]ꭏ Version Control with Git
- [📺 ] 045 Introduction to Git LFS
- [📺 ] Git & Scripting
Be able to setup model validation
- [📰] Evaluating a machine learning model
- [📰] Validating your Machine Learning Model
- [📰] Measuring Performance: AUPRC and Average Precision
- [📰] Measuring Performance: AUC (AUROC)
- [📰] Measuring Performance: The Confusion Matrix
- [📰] Measuring Performance: Accuracy
- [📰] ROC Curves: Intuition Through Visualization
- [📰] Precision, Recall, Accuracy, and F1 Score for Multi-Label Classification
- [📰] The Complete Guide to AUC and Average Precision: Simulations and Visualizations
- [📰] Best Use of Train/Val/Test Splits, with Tips for Medical Data
- [📰] The correct way to evaluate online machine learning models
- [📰] Proxy Metrics
- [📺 ] Accuracy as a Failure
- [📺 ] Applied Machine Learning 2020–09 — Model Evaluation and Metrics 1:18:23
- [📺 ] Machine Learning Fundamentals: Cross Validation 0:06:04
- [📺 ] Machine Learning Fundamentals: The Confusion Matrix 0:07:12
- [📺 ] Machine Learning Fundamentals: Sensitivity and Specificity 0:11:46
- [📺 ] Machine Learning Fundamentals: Bias and Variance 0:06:36
- [📺 ] ROC and AUC, Clearly Explained! 0:16:26
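The metrics covered above (precision, recall, F1) all derive from the confusion matrix; a dependency-free sketch for the binary case makes the definitions concrete:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall and F1 computed from raw binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # ≈ (0.667, 0.667, 0.667)
```

Note that accuracy never appears: on imbalanced data (see the resources above) precision and recall are the numbers worth watching.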
Be familiar with inner working of models
Bayes’ theorem is super interesting and applicable:
- [📰] Naive Bayes classification
- [📰] Linear regression
- [📰] Polynomial regression
- [📰] Logistic regression
- [📰] Decision trees
- [📰] K-nearest neighbors
- [📰] Support Vector Machines
- [📰] Random forests
- [📰] Boosted trees
- [📰] Hacker’s Guide to Fundamental Machine Learning Algorithms with Python
- [📰] Neural networks: activation functions
- [📰] Neural networks: training with backpropagation
- [📰] Neural Network from scratch-part 1
- [📰] Neural Network from scratch-part 2
- [📰] Perceptron to Deep-Neural-Network
- [📰] One-vs-Rest strategy for Multi-Class Classification
- [📰] Multi-class Classification — One-vs-All & One-vs-One
- [📰] One-vs-Rest and One-vs-One for Multi-Class Classification
- [📰] Deep Learning Algorithms — The Complete Guide
- [📰] Machine Learning Techniques Primer
- [ ] AWS: Understanding Neural Networks
- [📖] Grokking Deep Learning
- [📖] Make Your Own Neural Network
- [ ] Coursera: Neural Networks and Deep Learning
- [💻] Extreme Gradient Boosting with XGBoost
- [💻] Ensemble Methods in Python
- [ ] StatQuest: Machine Learning
- [ ] StatQuest: Fitting a line to data, aka least squares, aka linear regression. 0:09:21
- [ ] StatQuest: Linear Models Pt.1 — Linear Regression 0:27:26
- [ ] StatQuest: Linear Models Pt.2 — t-tests and ANOVA 0:11:37
- [ ] StatQuest: Odds and Log(Odds), Clearly Explained!!! 0:11:30
- [ ] StatQuest: Odds Ratios and Log(Odds Ratios), Clearly Explained!!! 0:16:20
- [ ] StatQuest: Logistic Regression 0:08:47
- [ ] Logistic Regression Details Pt1: Coefficients 0:19:02
- [ ] Logistic Regression Details Pt 2: Maximum Likelihood 0:10:23
- [ ] Logistic Regression Details Pt 3: R-squared and p-value 0:15:25
- [ ] Saturated Models and Deviance 0:18:39
- [ ] Deviance Residuals 0:06:18
- [ ] Regularization Part 1: Ridge (L2) Regression 0:20:26
- [ ] Regularization Part 2: Lasso (L1) Regression 0:08:19
- [ ] Ridge vs Lasso Regression, Visualized!!! 0:09:05
- [ ] Regularization Part 3: Elastic Net Regression 0:05:19
- [ ] StatQuest: Principal Component Analysis (PCA), Step-by-Step 0:21:57
- [ ] StatQuest: PCA main ideas in only 5 minutes!!! 0:06:04
- [ ] StatQuest: PCA — Practical Tips 0:08:19
- [ ] StatQuest: PCA in Python 0:11:37
- [ ] StatQuest: Linear Discriminant Analysis (LDA) clearly explained. 0:15:12
- [ ] StatQuest: MDS and PCoA 0:08:18
- [ ] StatQuest: t-SNE, Clearly Explained 0:11:47
- [ ] StatQuest: Hierarchical Clustering 0:11:19
- [ ] StatQuest: K-means clustering 0:08:57
- [ ] StatQuest: K-nearest neighbors, Clearly Explained 0:05:30
- [ ] Naive Bayes, Clearly Explained!!! 0:15:12
- [ ] Gaussian Naive Bayes, Clearly Explained!!! 0:09:41
- [ ] StatQuest: Decision Trees 0:17:22
- [ ] StatQuest: Decision Trees, Part 2 — Feature Selection and Missing Data 0:05:16
- [ ] Regression Trees, Clearly Explained!!! 0:22:33
- [ ] How to Prune Regression Trees, Clearly Explained!!! 0:16:15
- [ ] StatQuest: Random Forests Part 1 — Building, Using and Evaluating 0:09:54
- [ ] StatQuest: Random Forests Part 2: Missing data and clustering 0:11:53
- [ ] The Chain Rule 0:18:23
- [ ] Gradient Descent, Step-by-Step 0:23:54
- [ ] Stochastic Gradient Descent, Clearly Explained!!! 0:10:53
- [ ] AdaBoost, Clearly Explained 0:20:54
- [⨊ ] Part 1: Regression Main Ideas 0:15:52
- [⨊ ] Part 2: Regression Details 0:26:45
- [⨊ ] Part 3: Classification 0:17:02
- [⨊ ] Part 4: Classification Details 0:36:59
- [⨊ ] Support Vector Machines, Clearly Explained!!! 0:20:32
- [ ] Support Vector Machines Part 2: The Polynomial Kernel 0:07:15
- [ ] Support Vector Machines Part 3: The Radial (RBF) Kernel 0:15:52
- [ ] XGBoost Part 1: Regression 0:25:46
- [ ] XGBoost Part 2: Classification 0:25:17
- [ ] XGBoost Part 3: Mathematical Details 0:27:24
- [ ] XGBoost Part 4: Crazy Cool Optimizations 0:24:27
- [ ] StatQuest: Fitting a curve to data, aka lowess, aka loess 0:10:10
- [ ] Statistics Fundamentals: Population Parameters 0:14:31
- [ ] Principal Component Analysis (PCA) clearly explained (2015) 0:20:16
- [ ] Decision Trees in Python from Start to Finish 1:06:23
- [🅤]ꭏ Classification Models
- [📺 ] Neural Networks from Scratch in Python
- [ ] Neural Networks from Scratch — P.1 Intro and Neuron Code 0:16:59
- [ ] Neural Networks from Scratch — P.2 Coding a Layer 0:15:06
- [ ] Neural Networks from Scratch — P.3 The Dot Product 0:25:17
- [ ] Neural Networks from Scratch — P.4 Batches, Layers, and Objects 0:33:46
- [ ] Neural Networks from Scratch — P.5 Hidden Layer Activation Functions 0:40:05
- [📺 ] Applied Machine Learning 2020–03 Supervised learning and model validation 1:12:00
- [📺 ] Applied Machine Learning 2020–05 — Linear Models for Regression 1:06:54
- [📺 ] Applied Machine Learning 2020–06 — Linear Models for Classification 1:07:50
- [📺 ] Applied Machine Learning 2020–07 — Decision Trees and Random Forests 1:07:58
- [📺 ] Applied Machine Learning 2020–08 — Gradient Boosting 1:02:12
- [📺 ] Applied Machine Learning 2020–18 — Neural Networks 1:19:36
- [📺 ] Applied Machine Learning 2020–12 — AutoML (plus some feature selection) 1:25:38
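To tie the gradient-descent videos above to code, here is plain batch gradient descent fitting a line to noiseless data (a minimal sketch; training a neural network scales up this same loop):

```python
def fit_line(xs, ys, lr=0.05, steps=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = mean((w*x + b - y)^2) w.r.t. w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])  # data from y = 2x + 1
print(w, b)  # ≈ 2.0, 1.0
```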
Originally published at https://dev.to on November 12, 2020.
By Bryan Guner on November 12, 2020.
Exported from Medium on August 24, 2021.
Everything You Need to Get Started With VSCode + Extensions & Resources
Every extension or tool you could possibly need
Here’s a rudimentary static site I made that goes into more detail on the extensions I use…
VSCodeExtensions: https://5fff5b9a2430bb564bfd451d--stoic-mccarthy-2c335f.netlify.app
Here’s the repo it was deployed from:
https://github.com/bgoonz/vscode-Extension-readmes
Commands:
Command Palette
Access all available commands based on your current context.
Keyboard Shortcut: Ctrl+Shift+P
- ⇧⌘P — Show all commands
- ⌘P — Show files
Sidebars
- ⌘B — Toggle sidebar
- ⇧⌘E — Explorer
- ⇧⌘F — Search
- ⇧⌘D — Debug
- ⇧⌘X — Extensions
- ⇧^G — Git (SCM)
Search
- ⌘F — Find
- ⌥⌘F — Replace
- ⇧⌘F — Find in files
- ⇧⌘H — Replace in files
Panel
- ⌘J — Toggle panel
- ⇧⌘M — Problems
- ⇧⌘U — Output
- ⇧⌘Y — Debug console
- ^` — Terminal
View
- ⌘K Z — Zen mode
- ⌘K U — Close unmodified
- ⌘K W — Close all
Debug
- F5 — Start
- ⇧F5 — Stop
- ⇧⌘F5 — Restart
- ^F5 — Start without debugging
- F9 — Toggle breakpoint
- F10 — Step over
- F11 — Step into
- ⇧F11 — Step out
- ⇧⌘D — Debug sidebar
- ⇧⌘Y — Debug panel
Tips-N-Tricks:
Here is a selection of common features for editing code. If the keyboard shortcuts aren’t comfortable for you, consider installing a keymap extension for your old editor.
Tip: You can see recommended keymap extensions in the Extensions view with Ctrl+K Ctrl+M, which filters the search to @recommended:keymaps.
Multi cursor selection
To add cursors at arbitrary positions, select a position with your mouse and use Alt+Click (Option+click on macOS).
To set cursors above or below the current position use:
Keyboard Shortcut: Ctrl+Alt+Up or Ctrl+Alt+Down
You can add additional cursors to all occurrences of the current selection with Ctrl+Shift+L.
Note: You can also change the modifier to Ctrl/Cmd for applying multiple cursors with the editor.multiCursorModifier setting. See Multi-cursor Modifier for details.
If you do not want to add all occurrences of the current selection, you can use Ctrl+D instead. This only selects the next occurrence after the one you selected so you can add selections one by one.
You can select blocks of text by holding Shift+Alt (Shift+Option on macOS) while you drag your mouse. A separate cursor will be added to the end of each selected line.
You can also use keyboard shortcuts to trigger column selection.
Extensions:
### AutoHotkey Plus
> Syntax Highlighting, Snippets, Go to Definition, Signature helper and Code formatter
### Bash Debug
> A debugger extension for Bash scripts based on bashdb
### Shellman
> Bash script snippets extension
C/C++
> C/C++ — Preview C/C++ extension by Microsoft, read official blog post for the details
> Clangd — Provides C/C++ language IDE features for VS Code using clangd: code completion, compile errors and warnings, go-to-definition and cross references, include management, code formatting, simple refactorings.
> gnu-global-tags — Provide Intellisense for C/C++ with the help of the GNU Global tool.
> YouCompleteMe — Provides semantic completions for C/C++ (and TypeScript, JavaScript, Objective-C, Golang, Rust) using YouCompleteMe.
> C/C++ Clang Command Adapter — Completion and Diagnostic for C/C++/Objective-C using Clang command.
> CQuery — C/C++ language server supporting multi-million line code base, powered by libclang. Cross references, completion, diagnostics, semantic highlighting and more.
C#, ASP .NET and .NET Core
> C# — C# extension by Microsoft, read official documentation for the details
> C# FixFormat — Fix format of usings / indents / braces / empty lines
> C# Extensions — Provides extensions to the IDE that will speed up your development workflow.
CSS
CSS Peek
Peek or Jump to a CSS definition directly from HTML, just like in Brackets!
- stylelint — Lint CSS/SCSS.
- Autoprefixer — Parse CSS, SCSS, LESS and add vendor prefixes automatically.
- Intellisense for CSS class names — Provides CSS class name completion for the HTML class attribute based on the CSS files in your workspace. Also supports React’s className attribute.
Groovy
- VsCode Groovy Lint — Groovy lint, format, prettify and auto-fix
Haskell
- haskell-linter
- Haskell IDE engine — provides language server for stack and cabal projects.
Shell
- autocomplate-shell
Java
JavaScript
- Babel JavaScript
- Visual Studio IntelliCode — This extension provides AI-assisted development features including autocomplete and other insights based on understanding your code context.
See the difference between these two here
> tslint — TSLint for Visual Studio Code (with "tslint.jsEnable": true).
> Prettier — Linter, Formatter and Pretty printer for Prettier.
> Schema.org Snippets — Snippets for Schema.org.
> Code Spell Checker — Spelling Checker for Visual Studio Code.
Framework-specific:
Vetur — Toolkit for Vue.js
Debugger for Chrome
A VS Code extension to debug your JavaScript code in the Chrome browser, or other targets that support the Chrome Debugging Protocol.
Facebook Flow
- Flow Language Support — provides all the functionality you would expect — linting, intellisense, type tooltips and click-to-definition
- vscode-flow-ide — an alternative Flowtype extension for Visual Studio Code
TypeScript
- tslint — TSLint for Visual Studio Code
- TypeScript Hero — Code outline view of your open TS, sort and organize your imports.
Markdown
markdownlint
Markdown linting and style checking for Visual Studio Code.
Markdown All in One
All-in-one markdown plugin (keyboard shortcuts, table of contents, auto preview, list editing and more)
### Markdown Emoji
> Adds emoji syntax support to VS Code’s built-in Markdown preview
PHP
IntelliSense
These extensions provide slightly different sets of features. While the first one offers better autocompletion support, the second one seems to have more features overall.
Laravel
- Laravel 5 Snippets — Laravel 5 snippets for Visual Studio Code
- Laravel Blade Snippets — Laravel blade snippets and syntax highlight support
- Laravel Model Snippets — Quickly get models up and running with Laravel Model Snippets.
- Laravel Artisan — Laravel Artisan commands within Visual Studio Code
- DotENV — Support for dotenv file syntax
Other extensions
- Format HTML in PHP — Formatting for the HTML in PHP files. Runs before the save action so you can still have a PHP formatter.
- Composer
- PHP Debug — XDebug extension for Visual Studio Code
- PHP DocBlocker
- php cs fixer — PHP CS Fixer extension for VS Code, php formatter, php code beautify tool
- phpcs — PHP CodeSniffer for Visual Studio Code
- phpfmt — phpfmt for Visual Studio Code
Python
- Python — Linting, Debugging (multi threaded, web apps), Intellisense, auto-completion, code formatting, snippets, unit testing, and more.
TensorFlow
- TensorFlow Snippets — This extension includes a set of useful code snippets for developing TensorFlow models in Visual Studio Code.
Rust
- Rust — Linting, auto-completion, code formatting, snippets and more
Productivity
ARM Template Viewer
Displays a graphical preview of Azure Resource Manager (ARM) templates. The view will show all resources with the official Azure icons and also linkage between the resources.
### Azure Cosmos DB
> Browse your database inside the vs code editor
> Everything you need for the Azure IoT development: Interact with Azure IoT Hub, manage devices connected to Azure IoT Hub, and develop with code snippets for Azure IoT Hub
### Bookmarks
> Mark lines and jump to them
Color Tabs
> An extension for big projects or monorepos that colors your tab/titlebar based on the current package
### Create tests
> An extension to quickly generate test files.
### Deploy
> Commands for upload or copy files of a workspace to a destination.
### Duplicate Action
> Ability to duplicate files and directories.
Error Lens
> Show language diagnostics inline (errors/warnings/…).
### ES7 React/Redux/GraphQL/React-Native snippets
> Provides Javascript and React/Redux snippets in ES7
### Gi
> Generating .gitignore files made easy
### GistPad
> Allows you to manage GitHub Gists entirely within the editor. You can open, create, delete, fork, star and clone gists, and then seamlessly begin editing files as if they were local. It’s like your very own developer library for building and referencing code snippets, commonly used config/scripts, programming-related notes/documentation, and interactive samples.
### Git History
> View git log, file or line History
Git Project Manager
> Automatically indexes your git projects and lets you easily toggle between them
GitLink
> GoTo current file’s online link in browser and Copy the link in clipboard.
### GitLens
> Provides Git CodeLens information (most recent commit, # of authors), on-demand inline blame annotations, status bar blame information, file and blame history explorers, and commands to compare changes with the working tree or previous versions.
### Git Indicators
> Atom-like git indicators on active panel
### GitHub
> Provides GitHub workflow support. For example browse project, issues, file (the current line), create and manage pull request. Support for other providers (e.g. gitlab or bitbucket) is planned. Have a look at the README.md on how to get started with the setup for this extension.
GitHub Pull Request Monitor
> This extension uses the GitHub api to monitor the state of your pull requests and let you know when it’s time to merge or if someone requested changes.
### GitLab Workflow
> Adds a GitLab sidebar icon to view issues, merge requests and other GitLab resources. You can also view the results of your GitLab CI/CD pipeline and check the syntax of your .gitlab-ci.yml.
Gradle Tasks
> Run gradle tasks in VS Code.
### Icon Fonts
> Snippets for popular icon fonts such as Font Awesome, Ionicons, Glyphicons, Octicons, Material Design Icons and many more!
Import Cost
> This extension will display inline in the editor the size of the imported package. The extension utilizes webpack with babili-webpack-plugin in order to detect the imported size.
Jira and Bitbucket
> Bringing the power of Jira and Bitbucket to VS Code — With Atlassian for VS Code you can create and view issues, start work on issues, create pull requests, do code reviews, start builds, get build statuses and more!
> Provides annotations on function calls in JS/TS files to provide parameter names to arguments.
### Jumpy
> Provides fast cursor movement, inspired by Atom’s package of the same name.
### Kanban
Simple Kanban board for use in Visual Studio Code, with time tracking and Markdown support.
Live Server
> Launch a development local Server with live reload feature for static & dynamic pages.
> Override the regular Copy and Cut commands to keep selections in a clipboard ring
ngrok for VSCode
> ngrok allows you to expose a web server running on your local machine to the internet. Just tell ngrok what port your web server is listening on. This extension allows you to control ngrok from the VSCode command palette
### Instant Markdown
> Simply, edit markdown documents in vscode and instantly preview it in your browser as you type.
### npm Intellisense
> Visual Studio Code plugin that autocompletes npm modules in import statements.
### Parameter Hints
> Provides parameter hints on function calls in JS/TS/PHP files.
### Partial Diff
> Compare (diff) text selections within a file, across different files, or to the clipboard
> Infer the structure of JSON and paste it as types in many programming languages
> Visual Studio Code plugin that autocompletes filenames
### Power Tools
> Extends Visual Studio Code via things like Node.js based scripts or shell commands, without writing separate extensions
### PrintCode
> PrintCode converts the code being edited into an HTML file, displays it by browser and prints it.
### Project Manager
> Easily switch between projects.
Project Dashboard
> VSCode Project Dashboard is a Visual Studio Code extension that lets you organize your projects in a speed-dial like manner. Pin your frequently visited folders, files, and SSH remotes onto a dashboard to access them quickly.
### Rainbow CSV
> Highlight columns in comma, tab, semicolon and pipe separated files, consistency check and linting with CSVLint, multi-cursor column editing, column trimming and realignment, and SQL-style querying with RBQL.
> Allows users to open any folder on a remote machine, in a container, or in the Windows Subsystem for Linux (WSL) and take advantage of VS Code's full feature set.
### Remote VSCode
> Allows you to edit files from a remote server directly in Visual Studio Code.
REST Client
> Allows you to send HTTP request and view the response in Visual Studio Code directly.
### Settings Sync
> Synchronize settings, snippets, themes, file icons, launch, key bindings, workspaces and extensions across multiple machines using GitHub Gist
### Text Power Tools
> All-in-one extension for text manipulation: filtering (grep), remove lines, insert number sequences and GUIDs, format content as table, change case, converting numbers and more. Great for finding information in logs and manipulating text.
### Todo Tree
> Custom keywords, highlighting, and colors for TODO comments. As well as a sidebar to view all your current tags.
### Toggle Quotes
> Cycle between single, double and backtick quotes
> TypeScript Language Service Plugin providing a set of source actions for easy objects destructuring
### WakaTime
> Automatic time tracker and productivity dashboard showing how long you coded in each project, file, branch, and language.
Formatting & Beautification
Better Align
> Align your code by colon(:), assignment(=,+=,-=,*=,/=) and arrow(=>). It has additional support for comma-first coding style and trailing comment.
> And it doesn’t require you to select what to be aligned, the extension will figure it out by itself.
### Auto Close Tag
> Automatically add HTML/XML close tag, same as Visual Studio IDE or Sublime Text
### Auto Rename Tag
> Auto rename paired HTML/XML tags
### beautify
> Beautify code in place for VS Code
html2pug
> Transform HTML to Pug inside Visual Studio Code, so you no longer need an external converter page.
ECMAScript Quotes Transformer
> Transform quotes of ECMAScript string literals
### Paste and Indent
> Paste code with “correct” indentation
Sort Lines
> Sorts lines of text in specific order
### Surround
> A simple yet powerful extension to add wrapper templates around your code blocks.
### Wrap Selection
> Wraps selection or multiple selections with symbol or multiple symbols
Formatting Toggle
> Allows you to toggle your formatter on and off with a simple click
Bracket Pair Colorizer
> This extension allows matching brackets to be identified with colours. The user can define which characters to match, and which colours to use.
### Auto Import
> Automatically finds, parses and provides code actions and code completion for all available imports. Works with Typescript and TSX.
shell-format
> shell script & Dockerfile & dotenv format
> Quickly translate selected text right in your code
Material Icon Theme
Browser Preview
> Browser Preview for VS Code enables you to open a real browser preview inside your editor that you can debug. Browser Preview is powered by Chrome Headless, and works by starting a headless Chrome instance in a new process. This enables a secure way to render web content inside VS Code, and enables interesting features such as in-editor debugging and more!
> FYI:… I HAVE TRIED ENDLESSLY TO GET THE DEBUGGER TO WORK IN VSCODE BUT IT DOES NOT… I SUSPECT THAT’S WHY IT HAS A 3 STAR RATING FOR AN OTHERWISE PHENOMENAL EXTENSION.
### CodeRoad
> Play interactive tutorials in your favorite editor.
### Code Runner
> Run code snippet or code file for multiple languages: C, C++, Java, JavaScript, PHP, Python, Perl, Ruby, Go, Lua, Groovy, PowerShell, BAT/CMD, BASH/SH, F# Script, C# Script, VBScript, TypeScript, CoffeeScript, Scala, Swift, Julia, Crystal, OCaml Script
### Code Time
> Automatic time reports by project and other programming metrics right in VS Code.
### Color Highlight
> Highlight web colors in your editor
### Output Colorizer
> Syntax highlighting for the VS Code Output Panel and log files
### Dash
> Dash integration in Visual Studio Code
> Leverage your favourite shell commands to edit text
> Editor Config for VS Code
ftp-sync
> Auto-sync your work to remote FTP server
> Highlights matching tags in the file.
Indent Rainbow
> A simple extension to make indentation more readable.
### PlatformIO
> An open source ecosystem for IoT development: supports 350+ embedded boards, 20+ development platforms, 10+ frameworks. Arduino and ARM mbed compatible.
### Polacode
> Polaroid for your code 📸.
> Note: Polacode no longer works as of the most recent update… go for Polacode2020 or CodeSnap…
### Quokka
This one is super cool!
> Rapid prototyping playground for JavaScript and TypeScript in VS Code, with access to your project’s files, inline reporting, code coverage and rich output formatting.
### Slack
> Send messages and code snippets, upload files to Slack
Personally, I found this extension slowed down my editor, in addition to conflicting with other extensions (I have over 200 as of this writing)….. yes, I have been made fully aware that I have a problem and need to get help
### Spotify
No real advantage over just using Spotify normally… it’s problematic enough in implementation that you won’t save any time using it. Further, it’s a bit tricky to configure … or at least it was the last time I tried syncing it with my spotify account.
> Provides integration with Spotify Desktop client. Shows the currently playing song in status bar, search lyrics and provides commands for controlling Spotify with buttons and hotkeys.
### SVG
> A Powerful SVG Language Support Extension(beta). Almost all the features you need to handle SVG.
### SVG Viewer
> View an SVG in the editor and export it as data URI scheme or PNG.
Text Marker (Highlighter)
> Highlight multiple text patterns with different colors at the same time. Highlighting a single text pattern can be done with the editor’s search functionality, but it cannot highlight multiple patterns at the same time, and this is where this extension comes handy.
### ESDOC MDN
THIS IS A MUST HAVE
> Quickly bring up helpful MDN documentation in the editor
In the interest of not making the reader scroll endlessly as I often do… I’ve made a separate post for that here. If you’ve made it this far, I thank you!
My Favorite VSCode Themes
Themesbryanguner.medium.com
If you found this guide helpful feel free to checkout my GitHub/gists where I host similar content:
bgoonz’s gists
Instantly share code, notes, and snippets. Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python |…gist.github.com
bgoonz — Overview
Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python | React | Node.js | Express | Sequelize…github.com
Discover More:
Web-Dev-Hub
Memoization, Tabulation, and Sorting Algorithms by Example Why is looking at runtime not a reliable method of…bgoonz-blog.netlify.app
By Bryan Guner on March 18, 2021.
Exported from Medium on August 24, 2021.
CODEX
Everything You Need To Know About Relational Databases, SQL, PostgreSQL and Sequelize To Build Your Backend!
For front-end developers who, like myself, struggle with making the jump to fullstack.
You can access and query the data using the findByPk, findOne, and findAll methods.
Terminology:
- NodeJS: We're going to use this to run JavaScript code on the server. I've decided to use the latest version of Node, v6.3.0 at the time of writing, so that we'll have access to most of the new features introduced in ES6.
- Express: As per their website, Express is a "fast, unopinionated, minimalist web framework for Node.js" that we're going to be building our Todo list application on.
- PostgreSQL: This is the powerful open-source database we're going to use. I've attached an article I published on the setup below!
- However, if you face issues while installing PostgreSQL, or you don't want to dive into installing it, you can opt for a version of PostgreSQL hosted online. I recommend ElephantSQL; I found it's pretty easy to get started with. However, the free version will only give you a 20MB allowance.
- Sequelize: In addition, we're going to use Sequelize, a database ORM that will interface with the Postgres database for us.
RDBMS and Database Entities
Define what a relational database management system is
- RDBMS stands for Relational Database Management System
- A software application that you run that your programs can connect to so that they can store, modify, and retrieve data.
- An RDBMS can track many databases. We will use PostgreSQL, or "postgres," primarily for our RDBMS, and it will be able to create individual databases for each of our projects.
Describe what relational data is
- In general, relational data is information that is connected to other pieces of information.
- When working with relational databases, we can connect two entries together utilizing foreign keys (explained below).
- In a pets database, we could be keeping track of dogs and cats as well as the toys that each of them owns. The ownership connection between a cat and a toy is the relational aspect of relational data: two pieces of information that can be connected together to show some sort of meaning.
Define what a database is
- The actual location that data is stored.
- A database can be made up of many tables that each store specific kinds of information.
- We could have a pets database that stores information about many different types of animals. Each animal type could potentially be represented by a different table.
Define what a database table is
- Within a database, a table stores one specific kind of information.
- The records (entries) on these tables can be connected to records on other tables through the use of foreign keys
- In our pets database, we could have a dogs table, with individual records
Describe the purpose of a primary key
- A primary key is used in the database as a unique identifier for the table.
- We often use an id field that simply increments with each entry. The incrementing ensures that each record has a unique identifier, even if there are other fields of the record that are repeated (two people with the same name would still need to have unique identifiers, for example).
- With a unique identifier, we can easily connect records within the table to records from other tables.
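The incrementing-id idea can be sketched with SQLite via Python's stdlib sqlite3 (the table and names here are hypothetical; SQLite's INTEGER PRIMARY KEY auto-assigns ids much like Postgres's SERIAL, though the dialects differ slightly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY in SQLite auto-assigns ids, similar to SERIAL in Postgres
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO people (name) VALUES ('Amy'), ('Amy')")  # same name twice
rows = conn.execute("SELECT id, name FROM people").fetchall()
# Each record still gets a unique id even though the names repeat
print(rows)  # [(1, 'Amy'), (2, 'Amy')]
```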
Describe the purpose of a foreign key
- A foreign key is used as the connector from this record to the primary key of another table's record.
- In our pets example, we can imagine two tables to demonstrate: a table to represent cats and a table to represent toys. Each of these tables has a primary key of id that is used as the unique identifier. In order to make a connection between a toy and a cat, we can add another field to the toys table called owner_id, indicating that it is a foreign key pointing at the cats table. By setting a toy's owner_id to the same value as a particular cat's id, we can indicate that the cat is the owner of that toy.
Describe how to properly name things in PostgreSQL
- Names within postgres should generally consist of only lowercase letters, numbers, and underscores.
- Tables within a database are plural by convention, so a table for cats would typically be cats and office locations would be office_locations (all lowercase, underscores to replace spaces, plural)
Connect to an instance of PostgreSQL with the command line tool psql
- The psql command by default will try to connect to a database and username that match your system's username
- We connect to a different database by providing an argument to the psql command
- psql pets
- To connect with a different username we can use the -U flag followed by the username we would like to use. To connect to the pets database as pets_user
- psql -U pets_user pets
- If there is a password for the user, we can tell psql that we would like a prompt for the password to show up by using the -W flag.
- psql -U pets_user -W pets (the order of our flags doesn't matter, as long as any arguments associated with them are together, such as pets_user directly following -U in this example)
Identify whether a user is a normal user or a superuser by the prompt in the psql shell
- You can tell if you are logged in as a superuser or normal user by the prompt in the terminal.
- If the prompt shows =>, the user is a normal user
- If the prompt shows =#, the user is a superuser
Create a user for the relational database management system
- Within psql, we can create a user with the CREATE USER {username} {WITH options} command.
- The most common options we'll want to use are WITH PASSWORD 'mypassword' to provide a password for the user we are creating, CREATEDB to allow the user to create new databases, or SUPERUSER to create a user with all elevated permissions.
Create a database in the database management system
- We can use the command CREATE DATABASE {database name} {options} inside psql to create a new database.
- A popular option we may utilize is WITH OWNER {owner name} to set another user as the owner of the database we are making.
Configure a database so that only the owner (and superusers) can connect to it
- We can GRANT and REVOKE privileges from a database to users or categories of users.
- In order to remove connection privileges to a database from the public we can use REVOKE CONNECT ON DATABASE {db_name} FROM PUBLIC;, removing all public connection access.
- If we wanted to grant it back, either to the public or to a specific user, we could similarly do GRANT CONNECT ON DATABASE {db_name} TO {specific user, PUBLIC, etc.};
View a list of databases in an installation of PostgreSQL
- To list all databases we can use the \l or \list command in psql.
Create tables in a database
CREATE TABLE {table name} (
{columnA} {typeA},
{columnB} {typeB},
etc…
);
- The whitespace does not matter. Creating the SQL statements on multiple lines is easier to read, but just like JavaScript, they can be presented differently.
- One common issue is that SQL does not like trailing commas, so the last column cannot have a comma after its type in this example.
View a list of tables in a database
- To list all database tables, use the \dt command.
Identify and describe the common data types used in PostgreSQL
- There are many different data types that we can use in our tables, here are some common examples:
- SERIAL: autoincrementing, very useful for IDs
- VARCHAR(n): a string with a character limit of n
- TEXT: doesn't have a character limit, but is less performant
- BOOLEAN: true/false
- SMALLINT: signed two-byte integer (-32768 to 32767)
- INTEGER: signed four-byte integer (standard)
- BIGINT: signed eight-byte integer (very large numbers)
- NUMERIC: or DECIMAL, can store exact decimal values
- TIMESTAMP: date and time
Describe the purpose of the UNIQUE and NOT NULL constraints, and create columns in database tables that have them
- In addition to the data type, we can provide flags for constraints to place on our column data.
- The UNIQUE flag indicates that the data for the column must not be repeated.
- By default we can create entries in our tables that are missing data from columns. When creating a pet, maybe we don't provide an age because we don't know it, for example. If we want to require that the data be present in order to create a new record, we can indicate that the column must be NOT NULL.
- In the example below, we are requiring our pets to have unique names and for those names to be present (both UNIQUE and NOT NULL). We have no such constraints on the age column, allowing repetition of ages or their complete absence.
CREATE TABLE pets (
id SERIAL PRIMARY KEY,
name VARCHAR(255) UNIQUE NOT NULL,
age SMALLINT
);
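Both constraints can be exercised with SQLite via Python's stdlib sqlite3 (an illustrative sketch; SQLite's dialect is close to, but not identical to, PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pets (
    id INTEGER PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    age SMALLINT
)""")
conn.execute("INSERT INTO pets (name) VALUES ('Floofy')")  # age may be NULL

try:
    conn.execute("INSERT INTO pets (name) VALUES ('Floofy')")  # duplicate name
    duplicate_ok = True
except sqlite3.IntegrityError:
    duplicate_ok = False  # UNIQUE rejects the repeated name

try:
    conn.execute("INSERT INTO pets (age) VALUES (3)")  # name omitted
    null_ok = True
except sqlite3.IntegrityError:
    null_ok = False  # NOT NULL rejects the missing name

print(duplicate_ok, null_ok)  # False False
```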
Create a primary key for a table
- When creating a table, we can indicate the primary key by passing the column name in parentheses like so:
CREATE TABLE people (
id SERIAL,
first_name VARCHAR(50),
last_name VARCHAR(50),
PRIMARY KEY (id)
);
- We could have also used the PRIMARY KEY flag on the column definition itself:
CREATE TABLE people (
id SERIAL PRIMARY KEY,
first_name VARCHAR(50),
last_name VARCHAR(50)
);
Create foreign key constraints to relate tables
- In our table definition, we can use the line FOREIGN KEY (foreign_key_stored_in_this_table) REFERENCES {other_table} ({other_tables_key_name}) to connect two tables.
- This is probably easier to see in an example:
CREATE TABLE people (
id SERIAL PRIMARY KEY,
first_name VARCHAR(50),
last_name VARCHAR(50)
);

CREATE TABLE pets (
id SERIAL PRIMARY KEY,
name VARCHAR(255),
age SMALLINT,
person_id INTEGER,
FOREIGN KEY (person_id) REFERENCES people (id)
);
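Foreign-key enforcement can be seen with SQLite via Python's stdlib sqlite3 (hypothetical data; note that SQLite only enforces foreign keys after the PRAGMA below, whereas Postgres enforces them by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific; Postgres enforces FKs by default
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, first_name TEXT)")
conn.execute("""CREATE TABLE pets (
    id INTEGER PRIMARY KEY,
    name TEXT,
    person_id INTEGER,
    FOREIGN KEY (person_id) REFERENCES people (id)
)""")
conn.execute("INSERT INTO people (first_name) VALUES ('Rose')")
conn.execute("INSERT INTO pets (name, person_id) VALUES ('Doggo', 1)")  # valid owner

try:
    conn.execute("INSERT INTO pets (name, person_id) VALUES ('Ghost', 99)")  # no person 99
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False  # the FK constraint rejects the dangling reference

print(orphan_allowed)  # False
```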
SQL is not case sensitive for its keywords but is for its entity names
- Exactly as the LO states, CREATE TABLE and create table are interpreted the same way. Using capitalization is a good convention in order to distinguish your keywords.
- The entity names that we use CAN be case-sensitive, but only when double-quoted: unquoted names are folded to lowercase, so pets and Pets refer to the same table, while a quoted "Pets" is distinct. In general, we prefer to use all lowercase for our entities to avoid any of this confusion.
SQL
- How to use the SELECT … FROM … statement to select data from a single table
- Supply the column names in the SELECT clause. If we want all columns, we can also use *
- Supply the table names in the FROM clause
-- Selects all columns from the friends table
SELECT
*
FROM
friends;
-- Selects the first_name column from the friends table (remember whitespace is ignored)
SELECT first_name
FROM friends;
- Sometimes we may need to specify which table we are selecting a column from, particularly if we have joined multiple tables together.
-- Notice here we are indicating that we want the "name" field from the "friends" table as well as the "name" field from the "puppies" table. We indicate the table name by table.column
-- We are also aliasing these fields with the AS keyword so that our returned results have friend_name and puppy_name as field headers
SELECT
friends.name AS friend_name, puppies.name AS puppy_name
FROM
friends
JOIN
puppies ON friends.puppy_id = puppies.id
How to use the WHERE clause on SELECT, UPDATE, and DELETE statements to narrow the scope of the command
- The WHERE clause allows us to select or apply actions to records that match specific criteria instead of to a whole table.
- We can use WHERE with a couple of different operators when making our comparison
- WHERE {column} = {value} provides an exact comparison
- WHERE {column} IN ({value1}, {value2}, {value3}, etc.) matches any provided value in the IN statement. We can make this more complex by having a subquery inside of the parentheses, having our column match any values within the returned results.
- WHERE {column} BETWEEN {value1} AND {value2} can check for matches between two values (numeric ranges)
- WHERE {column} LIKE {pattern} can check for matches to a string. This is most useful when we use the wildcard %, such as WHERE breed LIKE '%Shepherd', which will match any breed that ends in Shepherd
- The NOT operator can also be used for negation in the checks.
- Mathematical operators can be used when performing calculations or comparisons within a query as well, such as:
SELECT name, breed, weight_lbs FROM puppies WHERE weight_lbs > 50;
-- OR
SELECT name, breed, age_yrs FROM puppies WHERE age_yrs * 10 = 5;
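These WHERE operators can be tried out with SQLite via Python's stdlib sqlite3 (the puppies data here is made up for illustration; SQLite's dialect closely matches this part of PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE puppies (name TEXT, breed TEXT, weight_lbs INTEGER)")
conn.executemany("INSERT INTO puppies VALUES (?, ?, ?)", [
    ("Rex", "German Shepherd", 60),
    ("Bitsy", "Chihuahua", 6),
    ("Ace", "Australian Shepherd", 45),
])

# Comparison operator
heavy = conn.execute("SELECT name FROM puppies WHERE weight_lbs > 50").fetchall()
# LIKE with the % wildcard
shepherds = conn.execute("SELECT name FROM puppies WHERE breed LIKE '%Shepherd'").fetchall()
# BETWEEN for a numeric range
small_range = conn.execute(
    "SELECT name FROM puppies WHERE weight_lbs BETWEEN 5 AND 50").fetchall()

print(heavy)        # [('Rex',)]
print(shepherds)    # [('Rex',), ('Ace',)]
print(small_range)  # [('Bitsy',), ('Ace',)]
```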
How to use the JOIN keyword to join two (or more) tables together into a single virtual table
- When we want to get information from a related table or do querying based on related table values, we can join the connected table by comparing the foreign key to where it lines up on the other table:
-- Here we are joining the puppies table onto the friends table. We are specifying that the comparison we should make is the foreign key puppy_id on the friends table lining up with the primary key id on the puppies table.
SELECT
*
FROM
friends
JOIN
puppies ON friends.puppy_id = puppies.id
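The same join can be run end to end with SQLite via Python's stdlib sqlite3 (hypothetical friends/puppies data; an ORDER BY is added so the row order is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE puppies (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE friends (id INTEGER PRIMARY KEY, name TEXT, puppy_id INTEGER)")
conn.execute("INSERT INTO puppies (name) VALUES ('Doggo'), ('Floofy')")
conn.execute("INSERT INTO friends (name, puppy_id) VALUES ('Amy', 1), ('Rose', 2)")

# Join the two tables by lining up the foreign key with the primary key,
# aliasing both "name" columns so the result headers are unambiguous
rows = conn.execute("""
    SELECT friends.name AS friend_name, puppies.name AS puppy_name
    FROM friends
    JOIN puppies ON friends.puppy_id = puppies.id
    ORDER BY friends.id
""").fetchall()
print(rows)  # [('Amy', 'Doggo'), ('Rose', 'Floofy')]
```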
How to use the INSERT statement to insert data into a table
- When a table is already created we can then insert records into it using the INSERT INTO keywords.
- We provide the name of the table that we would like to add records to, followed by the VALUES keyword and each record we are adding. Here's an example:
-- We are providing the table name, then multiple records to insert
-- The values are listed in the order that they are defined on the table
INSERT INTO table_name
VALUES
(column1_value, column2_value, column3_value),
(column1_value, column2_value, column3_value),
(column1_value, column2_value, column3_value);
- We can also specify columns when we are inserting data. This makes it clear which fields we are providing data for and allows us to provide them out of order, skip null or default values, etc.
-- In this example, we want to use the default value for id since it is autoincremented, so we provide DEFAULT for this field
INSERT INTO friends (id, first_name, last_name)
VALUES
(DEFAULT, 'Amy', 'Pond');
-- Alternatively, we can leave it out completely, since the default value will be used if none is provided
INSERT INTO friends (first_name, last_name)
VALUES
('Rose', 'Tyler'),
('Martha', 'Jones'),
('Donna', 'Noble'),
('River', 'Song');
How to use an UPDATE statement to update data in a table
- The UPDATE keyword can be used to find records and change their values in our database.
- We generally follow the pattern of UPDATE {table} SET {column} = {new value} WHERE {match condition};.
- Without a condition to narrow our records down, we will update every record in the table, so this is an important thing to double check!
- We can update multiple fields as well by specifying each column in parentheses and their associated new values: UPDATE {table} SET ({column1}, {column2}) = ({value1}, {value2}) WHERE {match condition};
-- Updates the pet with id of 4 to change their name and breed
UPDATE
pets
SET
(name, breed) = ('Floofy', 'Fluffy Dog Breed')
WHERE
id = 4;
How to use a DELETE statement to remove data from a table
- Similar to selecting records, we can delete records from a table by specifying what table we are deleting from and what criteria we would like to match in order to delete.
- We follow the general structure DELETE FROM {table} WHERE {condition};
- The condition here is also very important! Without a condition, all records match and will be deleted.
-- Deletes from the pets table any record that either has a name of Floofy, a name of Doggo, or an id of 3.
DELETE FROM
pets
WHERE
name IN ('Floofy', 'Doggo') OR id = 3;
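UPDATE and DELETE with a WHERE clause can be exercised together with SQLite via Python's stdlib sqlite3 (hypothetical pets data; note the multi-column SET (a, b) = (…) form is Postgres-specific, so this sketch assigns each column separately):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pets (id INTEGER PRIMARY KEY, name TEXT, breed TEXT)")
conn.execute(
    "INSERT INTO pets (name, breed) "
    "VALUES ('Floofy', 'Poodle'), ('Doggo', 'Lab'), ('Rex', 'Corgi')")

# UPDATE only touches rows matched by WHERE (one assignment per column in SQLite)
conn.execute("UPDATE pets SET name = 'Fluffy', breed = 'Fluffy Dog Breed' WHERE id = 1")

# DELETE with IN / OR, as in the example above: every row matches one condition
conn.execute("DELETE FROM pets WHERE name IN ('Fluffy', 'Doggo') OR id = 3")

remaining = conn.execute("SELECT name FROM pets").fetchall()
print(remaining)  # [] -- all three rows matched and were deleted
```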
How to use a seed file to populate data in a database
- Seed files are a great way for us to create records that we want to start our database out with.
- Instead of having to individually add records to our tables or manually entering them in psql or postbird, we can create a file that has all of these records and then just pass this file to psql to run.
- Seed files are also great if we ever need to reset our database. We can clear out any records that we have by dropping all of our tables, then just run our seed files to get it into a predetermined starting point. This is great for our personal projects, testing environments, starting values for new tables we create, etc.
- There are two main ways we can use a seed file with psql: the < and the | operators. They perform the same function for us, just in slightly different orders, taking the content of a .sql file and executing it within the psql environment:
- psql -d {database} < {sql filepath}
- cat {sql filepath} | psql -d {database}
SQL (continued)
How to perform relational database design
- Steps to Designing the Database:
- Define the entities. What data are you storing? What are the fields for each entity?
- You can think of this in similar ways to OOP (object oriented programming).
- If you wanted to model this information using classes, what classes would you make? Those are generally going to be the tables that are created in your database.
- The attributes of your classes are generally going to be the fields/columns that we need for each table.
- Identify primary keys. Most of the time these will be ids that you can generate as a serial field, incrementing with each addition to the database.
- Establish table relationships. Connect related data together with foreign keys. Know how we store these keys in a one-to-one, one-to-many, or many-to-many relationship.
- With a one-to-one or one-to-many relationship, we are able to use a foreign key on the table to indicate the other specific record that it is connected to.
- With a many-to-many relationship, each record could be connected to multiple records, so we have to create a join table to connect these entities. A record on this join table connects a record from one table to a record from another table.
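A many-to-many join table can be sketched with SQLite via Python's stdlib sqlite3 (the students/classes/enrollments schema is a hypothetical example, not from the text above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE classes  (id INTEGER PRIMARY KEY, title TEXT);
-- join table: each row connects one student record to one class record
CREATE TABLE enrollments (
    student_id INTEGER REFERENCES students (id),
    class_id   INTEGER REFERENCES classes (id)
);
INSERT INTO students (name) VALUES ('Amy'), ('Rose');
INSERT INTO classes (title) VALUES ('SQL 101'), ('History');
INSERT INTO enrollments VALUES (1, 1), (1, 2), (2, 1);
""")

# Walk across the join table to find everything one student is enrolled in
amy_classes = conn.execute("""
    SELECT classes.title FROM classes
    JOIN enrollments ON enrollments.class_id = classes.id
    JOIN students ON students.id = enrollments.student_id
    WHERE students.name = 'Amy'
    ORDER BY classes.id
""").fetchall()
print(amy_classes)  # [('SQL 101',), ('History',)]
```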
How to use transactions to group multiple SQL commands into one succeed or fail operation
- We can define an explicit transaction using BEGIN and ending with either COMMIT or ROLLBACK.
- If any command inside the block fails, everything will be rolled back. We can also specify that we want to roll back at the end of the block instead of committing. We saw that this can be useful when analyzing operations that would manipulate our database.
BEGIN;
UPDATE accounts SET balance = balance - 100.00
WHERE name = 'Alice';
UPDATE branches SET balance = balance - 100.00
WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Alice');
UPDATE accounts SET balance = balance + 100.00
WHERE name = 'Bob';
UPDATE branches SET balance = balance + 100.00
WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Bob');
COMMIT;

BEGIN;
EXPLAIN ANALYZE
UPDATE cities
SET city = 'New York City'
WHERE city = 'New York';
ROLLBACK;
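The all-or-nothing behavior of BEGIN/ROLLBACK can be demonstrated with SQLite via Python's stdlib sqlite3 (hypothetical accounts data; isolation_level=None puts the module in autocommit mode so we can issue the transaction commands ourselves):

```python
import sqlite3

# Autocommit mode lets us send BEGIN / COMMIT / ROLLBACK explicitly
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('Alice', 500), ('Bob', 100)")

conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'Alice'")
conn.execute("UPDATE accounts SET balance = balance + 100 WHERE name = 'Bob'")
conn.execute("ROLLBACK")  # undo both updates as a single unit

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'Alice': 500, 'Bob': 100} -- unchanged
```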
How to apply indexes to tables to improve performance
- An index can help optimize queries that we have to run regularly. If we are constantly looking up records in a table by a particular field (such as username or phone number), we can add an index in order to speed up this process.
- An index maintains a sorted version of the field with a reference to the record that it points to in the table (via primary key). If we want to find a record based on a field that we have an index for, we can look through this index in a more efficient manner than having to scan through the entire table (generally O(log n) since the index is sorted, instead of O(n) for a sequential scan).
-
To add an index to a field we can use the following syntax:
CREATE INDEX index_name ON table_name (column_name);
-
To drop an index we can do the following:
DROP INDEX index_name;
Making an index is not always the best approach. Indices allow for faster lookups, but they slow down record insertion and updates of the associated fields, since we not only have to add the information to the table but also update the index.
We generally wouldn't care about adding an index if:
- The tables are small
- We are updating the table frequently, especially the associated columns
- The column has many NULL values
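The O(log n) vs. O(n) point above can be made concrete with a toy sketch: a sorted "index" of [value, rowId] pairs can be binary-searched, while the unindexed table would have to be scanned row by row. (Illustrative only — real database indexes are B-trees, but the lookup idea is the same.)

```javascript
// A toy "index": [indexedValue, rowId] pairs kept sorted by value.
function indexLookup(index, value) {
  let lo = 0, hi = index.length - 1;
  while (lo <= hi) {                       // O(log n) binary search
    const mid = (lo + hi) >> 1;
    if (index[mid][0] === value) return index[mid][1]; // found: return row id
    if (index[mid][0] < value) lo = mid + 1;
    else hi = mid - 1;
  }
  return null; // no matching record
}

// Sorted by username, each entry pointing at a row id in the table.
const usernameIndex = [
  ['alice', 2],
  ['bob', 0],
  ['carol', 1],
];

console.log(indexLookup(usernameIndex, 'bob'));  // 0
console.log(indexLookup(usernameIndex, 'dave')); // null
```

Note that every INSERT or UPDATE on the indexed column would also have to keep this sorted structure in order, which is exactly the write overhead described above.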
Explain what the EXPLAIN command is used for:
- EXPLAIN gives us information about how a query will run (the query plan)
- It gives us an idea of how our database will search for data, as well as a qualitative comparison of how expensive that operation will be. Comparing the cost of two queries tells us which one is more efficient (lower cost).
- We can also use the ANALYZE command with EXPLAIN, which will actually run the specified query. Doing so gives us more detailed information, such as the milliseconds it took our query to execute as well as specifics like the exact number of rows filtered and returned.
- Demonstrate how to install and use the node-postgres library and its Pool class to query a PostgreSQL-managed database
-
We can add the node-postgres library to our application with npm install pg. From there we will typically use the Pool class associated with this library. That way we can run many SQL queries with one database connection (as opposed to Client, which closes the connection after a query).
const { Pool } = require('pg');
// If we need to specify a user, password, or database, we can do so when we create a Pool instance; otherwise the default values for logging in to psql are used:
const pool = new Pool({ user: '<>', password: '<>', database: '<>' });
-
The query method on the Pool instance will allow us to execute a SQL query on our database. We can pass in a string that represents the query we want to run
const allAirportsSql = `
  SELECT id, city_id, faa_id, name
  FROM airports;
`;
async function selectAllAirports() {
const results = await pool.query(allAirportsSql);
console.log(results.rows);
pool.end(); // invoking end() will close our connection to the database
}
selectAllAirports();
- The return value of this asynchronous function is an object with a rows key that points to an array of objects, each object representing a record with field names as keys.
Explain how to write prepared statements with placeholders for parameters of the form $1 , $2 , and so on
- The prepared statement (SQL string that we wrote) can also be made more dynamic by allowing for parameters to be passed in.
-
The Pool instance's query method allows us to pass a second argument: an array of parameters to be used in the query string. The locations of the parameter substitutions are designated with $1, $2, etc., to signify the first, second, etc., arguments.
const airportsByNameSql = `
  SELECT name, faa_id
  FROM airports
  WHERE UPPER(name) LIKE UPPER($1)
`;
async function selectAirportsByName(name) {
const results = await pool.query(airportsByNameSql, [ `%${name}%` ]);
console.log(results.rows);
pool.end(); // invoking end() will close our connection to the database
}
// Get the airport name from the command line and store it
// in the variable "name". Pass that value to the
// selectAirportsByName function.
const name = process.argv[2];
// console.log(name);
selectAirportsByName(name);
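To visualize how the positional placeholders line up with the parameter array, here is a small sketch. Note that this is NOT how pg handles parameters — node-postgres sends the SQL text and the values to the server separately, which is what protects against SQL injection — so this helper is only for picturing the $1/$2 numbering.

```javascript
// Visualize how $1, $2, ... line up with the params array.
// NOTE: never build real SQL by string substitution like this;
// always pass the params array to pool.query() instead.
function previewQuery(sql, params) {
  // $N maps to params[N - 1]; JSON.stringify just makes values visible.
  return sql.replace(/\$(\d+)/g, (_, n) => JSON.stringify(params[n - 1]));
}

const sql =
  'SELECT name FROM airports WHERE UPPER(name) LIKE UPPER($1) AND country = $2';
console.log(previewQuery(sql, ['%intl%', 'US']));
// SELECT name FROM airports WHERE UPPER(name) LIKE UPPER("%intl%") AND country = "US"
```

The `country` column here is hypothetical, used only to show a second placeholder.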
ORM
- How to install, configure, and use Sequelize, an ORM for JavaScript
- To start a new project we use our standard npm initialize statement
- npm init -y
- Add in the packages we will need (sequelize, sequelize-cli, and pg)
- npm install sequelize@5.0.0 sequelize-cli@5.0.0 pg@8.0.0
- Initialize sequelize in our project
- npx sequelize-cli init
- Create a database user with credentials we will use for the project
- psql
- CREATE USER example_user WITH PASSWORD 'badpassword'
-
Here we can also create databases since we are already in postgres:
CREATE DATABASE example_app_development WITH OWNER example_user
CREATE DATABASE example_app_test WITH OWNER example_user
CREATE DATABASE example_app_production WITH OWNER example_user
If we don't create these databases now, we could also create them after we make our changes to our config file. If we take this approach, we need to make sure the user we created has the CREATEDB option, since sequelize will attempt to make the databases as this user. This other approach would look like:
In psql: CREATE USER example_user WITH PASSWORD 'badpassword' CREATEDB
In terminal: npx sequelize-cli db:create
-
Double check that our configuration file matches our username, password, database, dialect, and seederStorage (these will be filled out for you in an assessment scenario):
{
  "development": {
    "username": "sequelize_recipe_box_app",
    "password": "HfKfK79k",
    "database": "recipe_box_development",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "seederStorage": "sequelize"
  },
  "test": {
    "username": "sequelize_recipe_box_app",
    "password": "HfKfK79k",
    "database": "recipe_box_test",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "seederStorage": "sequelize"
  },
  "production": {
    "username": "sequelize_recipe_box_app",
    "password": "HfKfK79k",
    "database": "recipe_box_production",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "seederStorage": "sequelize"
  }
}
- How to use database migrations to make your database grow with your application in a source-control enabled way
Migrations
-
In order to make new database tables and sequelize models that reflect them, we want to generate a migration file and model file using model:generate
npx sequelize-cli model:generate --name Cat --attributes "firstName:string,specialSkill:string"
Here we are creating a migration file and a model file for a Cat. We are specifying that we want this table to have fields for firstName and specialSkill. Sequelize will automatically make fields for an id, createdAt, and updatedAt, as well, so we do not need to specify these.
Once our migration file is created, we can go in and edit any details that we need to. Most often we will want to add database constraints such as allowNull: false, add a uniqueness constraint with unique: true, add character limits to fields such as type: Sequelize.STRING(100), or specify a foreign key that references another table with references: { model: 'Categories' }.
-
After we make any necessary changes to our migration file, we need to perform the migration, which will run the SQL commands to actually create the table.
npx sequelize-cli db:migrate
This command runs any migration files that have not been previously run, in the order that they were created (this is why the timestamp in the file name is important)
-
If we realize that we made a mistake after migrating, we can undo our previous migration, or all of our migrations. After undoing them, we can make any changes necessary to our migration files. (They won't be deleted by the undo, so we don't need to generate anything! Just make the necessary changes to the files that already exist and save them.) Running the migrations again will create the tables with the updates reflected.
npx sequelize-cli db:migrate:undo
npx sequelize-cli db:migrate:undo:all
Models Validations and Associations
- In addition to the migration files, our model:generate command also created a model file for us. This file is what allows sequelize to transform the results of its SQL queries into useful JavaScript objects for us.
-
The model is where we can specify a validation that we want to perform before trying to run a SQL query. If the validation fails, we can respond with a message instead of running the query, which can be an expensive operation that we know won't work.
// Before we make changes, sequelize generates only the type specification for this field:
specification: DataTypes.TEXT
// We can replace the generated format with an object to specify not only the type, but also the validations that we want to implement. The validations can take in arguments as well as messages to respond with on failure.
specification: {
  type: DataTypes.TEXT,
  validate: {
    notEmpty: {
      msg: 'The specification cannot be empty'
    },
    len: {
      args: [10, 100],
      msg: 'The specification must be between 10 and 100 characters'
    }
  }
}
Another key part of the model file is setting up our associations. We can use the belongsTo, hasMany, and belongsToMany methods to set up model-level associations. Doing so is what creates helpful functionality like addOwner that we saw in the pets example: a function that automatically generates the SQL necessary to create a PetOwner record and supplies the appropriate petId and ownerId.
In a one-to-many association, we need to have a belongsTo association on the many side, and a hasMany association on the one side:
Instruction.belongsTo(models.Recipe, { foreignKey: 'recipeId' });
Recipe.hasMany(models.Instruction, { foreignKey: 'recipeId' });
-
In a many-to-many association, we need to have a belongsToMany on each side of the association. We generally specify a columnMapping object to show the association more clearly:
// In our Owner model
// To connect this Owner to a Pet through the PetOwner
const columnMapping = {
  through: 'PetOwner',  // join table
  otherKey: 'petId',    // key that connects to the other table in the many association
  foreignKey: 'ownerId' // our foreign key in the join table
}
Owner.belongsToMany(models.Pet, columnMapping);
// In our Pet model
// To connect this Pet to an Owner through the PetOwner
const columnMapping = {
  through: 'PetOwner',  // join table
  otherKey: 'ownerId',  // key that connects to the other table in the many association
  foreignKey: 'petId'   // our foreign key in the join table
}
Pet.belongsToMany(models.Owner, columnMapping);
How to perform CRUD operations with Sequelize
- Seed Files
- Seed files can be used to populate our database with starter data.
- npx sequelize-cli seed:generate --name add-cats
- up indicates what to create when we seed our database, down indicates what to delete if we want to unseed the database.
-
For our up, we use the queryInterface.bulkInsert() method, which takes in the name of the table to seed and an array of objects representing the records we want to create:
up: (queryInterface, Sequelize) => {
return queryInterface.bulkInsert('<>', [{
field1: value1a,
field2: value2a
}, {
field1: value1b,
field2: value2b
}, {
field1: value1c,
field2: value2c
}]);
}
-
For our down, we use the queryInterface.bulkDelete() method, which takes in the name of the table and an object representing our WHERE clause. Unseeding will delete all records from the specified table that match the WHERE clause.
// If we want to specify what to remove:
down: (queryInterface, Sequelize) => {
return queryInterface.bulkDelete('<>', {
  field1: [value1a, value1b, value1c], // ...etc.
});
};
// If we want to remove everything from the table:
down: (queryInterface, Sequelize) => {
return queryInterface.bulkDelete('<>', null, {});
};
Running npx sequelize-cli db:seed:all will run all of our seeder files.
npx sequelize-cli db:seed:undo:all will undo all of our seeding.
If we omit the :all we can run specific seed files
Inserting with Build and Create
In addition to seed files, which we generally use for starter data, we can create new records in our database by using build and save, or the combined create.
-
Use the .build method of the Cat model to create a new Cat instance in index.js
// Constructs an instance of the JavaScript Cat class. Does not
// save anything to the database yet. Attributes are passed in as a
// POJO.
const newCat = Cat.build({
firstName: 'Markov',
specialSkill: 'sleeping',
age: 5
});
// This actually creates a new Cats record in the database. We must
// wait for this asynchronous operation to succeed.
await newCat.save();
// This builds and saves all in one step. If we don't need to perform any operations on the instance before saving it, this can optimize our code.
const newerCat = await Cat.create({
firstName: 'Whiskers',
specialSkill: 'sleeping',
age: 2
})
Updating Records
- When we have a reference to an instance of a model (i.e. after we have queried for it or created it), we can update values by simply reassigning those fields and using the save method
Deleting Records
- When we have a reference to an instance of a model, we can delete that record by using destroy
const cat = await Cat.findByPk(1);
// Remove the Markov record.
await cat.destroy();
- We can also call destroy on the model itself. By passing in an object that specifies a where clause, we can destroy all records that match that query
- await Cat.destroy({ where: { specialSkill: 'jumping' } });
How to query using Sequelize
findAll
const cats = await Cat.findAll();
// Log the fetched cats.
// The extra arguments to stringify are a replacer and a space respectively
// Here we're specifying a space of 2 in order to print more legibly
// We don't want a replacer, so we pass null just so that we can pass a 3rd argument
console.log(JSON.stringify(cats, null, 2));
WHERE clause
- Passing an object to findAll can add on clauses to our query
- The where key takes an object as a value to indicate what we are filtering by
-
{ where: { field: value } } => WHERE field = value
const cats = await Cat.findAll({ where: { firstName: "Markov" } });
console.log(JSON.stringify(cats, null, 2));
OR in the WHERE clause
- Using an array for the value tells sequelize we want to match any of these values
{ where: { field: [value1, value2] } } => WHERE field IN (value1, value2)
const cats = await Cat.findAll({ where: { firstName: ["Markov", "Curie"] } });
console.log(JSON.stringify(cats, null, 2));
AND in the WHERE clause
- Providing additional key/value pairs to the where object indicates all filters must match
- { where: { field1: value1, field2: value2 } } => WHERE field1 = value1 AND field2 = value2
const cats = await Cat.findAll({
  where: {
    firstName: "Markov",
    age: 4
  }
});
console.log(JSON.stringify(cats, null, 2));
Sequelize Op operator
- By requiring Op from the sequelize library we can provide more advanced comparison operators
- const { Op } = require("sequelize");
-
Op.ne: Not equal operator
const cats = await Cat.findAll({
where: {
firstName: {
// All cats where the name is not equal to "Markov"
// We use brackets in order to evaluate Op.ne and use the value as the key
[Op.ne]: "Markov"
},
},
});
console.log(JSON.stringify(cats, null, 2));
Op.and: and operator
const cats = await Cat.findAll({
where: {
// The array that Op.and points to must all be true
// Here, we find cats where the name is not "Markov" and the age is 4
[Op.and]: [{
firstName: {
[Op.ne]: "Markov"
}
}, {
age: 4
}, ],
},
});
console.log(JSON.stringify(cats, null, 2));
Op.or: or operator
const cats = await Cat.findAll({
where: {
// One condition in the array that Op.or points to must be true
// Here, we find cats where the name is "Markov" or where the age is 4
[Op.or]: [{
firstName: "Markov"
}, {
age: 4
}, ],
},
});
console.log(JSON.stringify(cats, null, 2));
Op.gt and Op.lt: greater than and less than operators
const cats = await Cat.findAll({
  where: {
    // Find all cats where the age is greater than 4
    age: {
      [Op.gt]: 4
    },
  },
});
console.log(JSON.stringify(cats, null, 2));
Ordering results
- Just like the where clause, we can pass an order key to specify we want our results ordered
- The key order points to an array with the fields that we want to order by
- By default, the order is ascending, just like standard SQL. If we want to specify descending, we can instead use a nested array with the field name as the first element and DESC as the second element. (We could also specify ASC as a second element in a nested array, but it is unnecessary as it is default)
-
const cats = await Cat.findAll({
  // Order by age descending, then by firstName ascending if cats have the same age
  order: [["age", "DESC"], "firstName"],
});
console.log(JSON.stringify(cats, null, 2));
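The order array above can be read as a comparator. A minimal in-memory JavaScript equivalent (illustrative only — Sequelize actually translates it into an ORDER BY clause) would be:

```javascript
// Sort an array of plain objects the way order: [["age", "DESC"], "firstName"]
// would: by age descending, then by firstName ascending on ties.
function sortCats(cats) {
  return [...cats].sort((a, b) =>
    (b.age - a.age) || a.firstName.localeCompare(b.firstName)
  );
}

const cats = [
  { firstName: 'Markov', age: 4 },
  { firstName: 'Curie', age: 7 },
  { firstName: 'Whiskers', age: 4 },
];
console.log(sortCats(cats).map((c) => c.firstName));
// [ 'Curie', 'Markov', 'Whiskers' ]
```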
// Get a reference to the cat record that we want to update (here just the cat with primary key of 1)
const cat = await Cat.findByPk(1);
// Change cat's attributes.
cat.firstName = "Curie";
cat.specialSkill = "jumping";
cat.age = 123;
// Save the new name to the database.
await cat.save();
Limiting results
-
We can provide a limit key in order to limit our results to a specified number
const cats = await Cat.findAll({
order: [
["age", "DESC"]
],
// Here we are limiting our results to one record. It will still return an array, just with one object inside. We could have said any number here, the result is always an array.
limit: 1,
});
console.log(JSON.stringify(cats, null, 2));
findOne
- If we only want one record to be returned we can use findOne instead of findAll
- If multiple records would have matched our findOne query, it will return the first record
-
Unlike findAll, findOne will return the object directly instead of an array. If no records matched the query it will return null.
// finds the oldest cat
const cat = await Cat.findOne({
  order: [["age", "DESC"]],
});
console.log(JSON.stringify(cat, null, 2));
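The difference in return shape can be summed up with a rough in-memory analogy (assuming the records are already fetched):

```javascript
// findAll always resolves to an array (possibly empty);
// findOne resolves to the first match, or null when nothing matches.
const records = [{ name: 'Markov' }, { name: 'Curie' }];

const findAllLike = (rows, pred) => rows.filter(pred);
const findOneLike = (rows, pred) => rows.find(pred) ?? null;

console.log(findAllLike(records, (r) => r.name === 'Curie')); // [ { name: 'Curie' } ]
console.log(findOneLike(records, (r) => r.name === 'Curie')); // { name: 'Curie' }
console.log(findOneLike(records, (r) => r.name === 'Tesla')); // null
```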
Querying with Associations
We can include associated data by adding an include key to our options object
const pet = await Pet.findByPk(1, {
  include: [PetType, Owner]
});
console.log(pet.id, pet.name, pet.age, pet.petTypeId, pet.PetType.type, pet.Owners);
We can get nested associations by having include point to an object that specifies which model we have an association with, then chaining an association on with another include
How to perform data validations with Sequelize
- See the database migrations section above.
-
In general, we add in a validate key to each field that we want validations for. This key points to an object that specifies all of the validations we want to make on that field, such as notEmpty, notNull, len, isIn, etc.
specification: {
  type: DataTypes.TEXT,
  validate: {
    notEmpty: {
      msg: 'The specification cannot be empty'
    },
    len: {
      args: [10, 100],
      msg: 'The specification must be between 10 and 100 characters'
    }
  }
}
How to use transactions with Sequelize
- We can create a transaction block in order to make sure either all operations are performed or none of them are
- We use the .transaction method in order to create our block. The method takes in a callback with an argument to track our transaction id (typically just a simple tx variable).
-
All of our sequelize operations can be passed a transaction key on their options argument which points to our transaction id. This indicates that this operation is part of the transaction block and should only be executed in the database when the whole block executes without error.
async function main() {
try {
// Do all database access within the transaction.
await sequelize.transaction(async (tx) => {
// Fetch Markov and Curie's accounts.
const markovAccount = await BankAccount.findByPk(
1, {
transaction: tx
},
);
const curieAccount = await BankAccount.findByPk(
2, {
transaction: tx
}
);
// No one can mess with Markov or Curie's accounts until the
// transaction completes! The account data has been locked!
// Increment Curie's balance by $5,000.
curieAccount.balance += 5000;
await curieAccount.save({
transaction: tx
});
// Decrement Markov's balance by $5,000.
markovAccount.balance -= 5000;
await markovAccount.save({
transaction: tx
});
});
} catch (err) {
// Report if anything goes wrong.
console.log("Error!");
for (const e of err.errors) {
console.log(`${e.instance.clientName}: ${e.message}`);
}
}
await sequelize.close();
}
main();
Sequelize Cheatsheet
Command Line
Sequelize provides utilities for generating migrations, models, and seed files. They are exposed through the sequelize-cli command.
Init Project
$ npx sequelize-cli init
You must create a database user and update the config/config.json file to match your database settings to complete the initialization process.
Create Database
npx sequelize-cli db:create
Generate a model and its migration
npx sequelize-cli model:generate --name <ModelName> --attributes <column>:<type>,<column>:<type>,...
Run pending migrations
npx sequelize-cli db:migrate
Rollback one migration
npx sequelize-cli db:migrate:undo
Rollback all migrations
npx sequelize-cli db:migrate:undo:all
Generate a new seed file
npx sequelize-cli seed:generate --name <seed-name>
Run all pending seeds
npx sequelize-cli db:seed:all
Rollback one seed
npx sequelize-cli db:seed:undo
Rollback all seeds
npx sequelize-cli db:seed:undo:all
Migrations
Create Table (usually used in the up() method)
// This uses the short form for references
return queryInterface.createTable('<TableName>', {
  '<columnName>': {
    type: Sequelize.<TYPE>,
    allowNull: <true|false>,
    unique: <true|false>,
    references: { model: '<TableName>' }, // This is the plural table name
                                          // that the column references.
  }
});
// This is the longer form for references that is less confusing
return queryInterface.createTable('<TableName>', {
  '<columnName>': {
    type: Sequelize.<TYPE>,
    allowNull: <true|false>,
    unique: <true|false>,
    references: {
      model: {
        tableName: '<TableName>' // This is the plural table name
      }
    }
  }
});
Delete Table (usually used in the down() function)
return queryInterface.dropTable('<TableName>');
Adding a column
return queryInterface.addColumn('<TableName>', '<columnName>', {
  type: Sequelize.<TYPE>,
  allowNull: <true|false>,
  unique: <true|false>,
  references: { model: '<TableName>' }, // This is the plural table name
                                        // that the column references.
});
Removing a column
return queryInterface.removeColumn('<TableName>', '<columnName>');
Model Associations
One to One between Student and Scholarship
student.js
Student.hasOne(models.Scholarship, { foreignKey: 'studentId' });
scholarship.js
Scholarship.belongsTo(models.Student, { foreignKey: 'studentId' });
One to Many between Student and Class
student.js
Student.belongsTo(models.Class, { foreignKey: 'classId' });
class.js
Class.hasMany(models.Student, { foreignKey: 'classId' });
Many to Many between Student and Lesson through StudentLessons table
student.js
const columnMapping = {
through: 'StudentLesson', // This is the model name referencing the join table.
otherKey: 'lessonId',
foreignKey: 'studentId'
}
Student.belongsToMany(models.Lesson, columnMapping);
lesson.js
const columnMapping = {
through: 'StudentLesson', // This is the model name referencing the join table.
otherKey: 'studentId',
foreignKey: 'lessonId'
}
Lesson.belongsToMany(models.Student, columnMapping);
Inserting a new item
// Way 1 - With build and save
const pet = Pet.build({
name: "Fido",
petTypeId: 1
});
await pet.save();
// Way 2 - With create
const pet = await Pet.create({
name: "Fido",
petTypeId: 1
});
Updating an item
// Find the pet with id = 1
const pet = await Pet.findByPk(1);
// Way 1
pet.name = "Fido, Sr."
await pet.save();
// Way 2
await pet.update({
name: "Fido, Sr."
});
Deleting a single item
// Find the pet with id = 1
const pet = await Pet.findByPk(1);
// Notice this is an instance method
await pet.destroy();
Deleting multiple items
// Notice this is a static class method
await Pet.destroy({
where: {
petTypeId: 1 // Destroys all the pets where the petType is 1
}
});
Query Format
findOne
await <Model>.findOne({
  where: {
    <column>: {
      [Op.<operator>]: <value>
    }
  },
});
findAll
await <Model>.findAll({
  where: {
    <column>: {
      [Op.<operator>]: <value>
    }
  },
  include: <AssociatedModel>,
  offset: 10,
  limit: 2
});
findByPk
await <Model>.findByPk(<primaryKey>, {
  include: <AssociatedModel>
});
Eager loading associations with include
Simple include of one related model.
await Pet.findByPk(1, {
include: PetType
})
Include can take an array of models if you need to include more than one.
await Pet.findByPk(1, {
include: [PetType, Owner]
})
Include can also take an object with the keys model and include.
This is in case you have nested associations.
In this case Owner doesn't have an association with PetType, but
Pet does, so we want to include PetType onto the Pet model.
await Owner.findByPk(1, {
  include: {
    model: Pet,
    include: PetType
  }
});
toJSON method
The confusingly named toJSON() method does not return a JSON string but instead
returns a POJO for the instance.
// pet is an instance of the Pet class
const pet = await Pet.findByPk(1);
console.log(pet) // prints a giant object with
// tons of properties and methods
// petPOJO is now just a plain old Javascript Object
const petPOJO = pet.toJSON();
console.log(petPOJO); // { name: "Fido", petTypeId: 1 }
Common Where Operators
const Op = Sequelize.Op
[Op.and]: [{a: 5}, {b: 6}] // (a = 5) AND (b = 6)
[Op.or]: [{a: 5}, {a: 6}] // (a = 5 OR a = 6)
[Op.gt]: 6, // > 6
[Op.gte]: 6, // >= 6
[Op.lt]: 10, // < 10
[Op.lte]: 10, // <= 10
[Op.ne]: 20, // != 20
[Op.eq]: 3, // = 3
[Op.is]: null // IS NULL
[Op.not]: true, // IS NOT TRUE
[Op.between]: [6, 10], // BETWEEN 6 AND 10
[Op.notBetween]: [11, 15], // NOT BETWEEN 11 AND 15
[Op.in]: [1, 2], // IN [1, 2]
[Op.notIn]: [1, 2], // NOT IN [1, 2]
[Op.like]: '%hat', // LIKE '%hat'
[Op.notLike]: '%hat' // NOT LIKE '%hat'
[Op.iLike]: '%hat' // ILIKE '%hat' (case insensitive) (PG only)
[Op.notILike]: '%hat' // NOT ILIKE '%hat' (PG only)
[Op.startsWith]: 'hat' // LIKE 'hat%'
[Op.endsWith]: 'hat' // LIKE '%hat'
[Op.substring]: 'hat' // LIKE '%hat%'
[Op.regexp]: '^[h|a|t]' // REGEXP/~ '^[h|a|t]' (MySQL/PG only)
[Op.notRegexp]: '^[h|a|t]' // NOT REGEXP/!~ '^[h|a|t]' (MySQL/PG only)
[Op.iRegexp]: '^[h|a|t]' // ~* '^[h|a|t]' (PG only)
[Op.notIRegexp]: '^[h|a|t]' // !~* '^[h|a|t]' (PG only)
[Op.like]: { [Op.any]: ['cat', 'hat'] } // LIKE ANY (ARRAY['cat', 'hat']) (PG only)
Accessing the Data
You can access and query the data using the findByPk, findOne, and findAll methods. First, make sure you import the models in your JavaScript file. In this case, we are assuming your JavaScript file is in the root of your project and so is the models folder.
const { Recipe, Ingredient, Instruction, MeasurementUnit } = require('./models');
The models folder exports each of the models that you have created. We have these four in our data model, so we will destructure the models to access each table individually. The associations that you have defined in each of your models will allow you to access data from related tables when you query your database using the include option.
If you want to find all recipes, for the recipe list, you would use the findAll method. You need to await this, so make sure your function is async.
async function findAllRecipes() {
return await Recipe.findAll();
}
If you would like to include all the ingredients so you can create a shopping list for all the recipes, you would use include. This is possible because of the association you have defined in your Recipe and Ingredient models.
async function getShoppingList() {
return await Recipe.findAll({ include: [ Ingredient ] });
}
If you only want to find one recipe where there is chicken in the ingredients list, you would use findOne and findByPk.
async function findAChickenRecipe() {
const chickenRecipe = await Ingredient.findOne({
where: {
foodStuff: 'chicken'
}
});
return await Recipe.findByPk(chickenRecipe.recipeId);
}
Data Access to Create/Update/Delete Rows
You have two options when you want to create a row in a table (where you are saving one record into the table). You can either .build the row and then .save it, or you can .create it. Either way, it does the same thing. Here are some examples:
Let’s say we have a form that accepts the name of the recipe (for simplicity). When we get the results of the form, we can:
const newRecipe = await Recipe.build({ title: 'Chicken Noodle Soup' });
await newRecipe.save();
This just created our new recipe and added it to our Recipes table. You can do the same thing like this:
await Recipe.create({ title: 'Chicken Noodle Soup' });
If you want to modify an item in your table, you can use update. Let's say we want to change the chicken noodle soup to chicken noodle soup with extra veggies: first we need to get the recipe, then we can update it.
const modRecipe = await Recipe.findOne({ where: { title: 'Chicken Noodle Soup' } });
await modRecipe.update({ title: 'Chicken Noodle Soup with Extra Veggies' });
To delete an item from your table, you will do the same kind of process. Find the recipe you want to delete and destroy it, like this:
const deleteThis = await Recipe.findOne({ where: { title: 'Chicken Noodle Soup with Extra Veggies' } });
await deleteThis.destroy();
NOTE: If you do not await these, you will receive a promise, so you will need to use .then and .catch to do more with the items you are accessing and modifying.
Documentation
For the data types and validations in your models, here are the official docs. The sequelize docs are hard to look at, so these are the specific sections with just the lists:
Sequelize Data Types: https://sequelize.org/v5/manual/data-types.html
Validations: https://sequelize.org/v5/manual/models-definition.html#validations
When you access the data in your queries, here are the operators available. Again, because the docs are hard to navigate, this is the specific section with the list of operators.
Operators: https://sequelize.org/v5/manual/querying.html#operators
The documentation for building, saving, creating, updating, and destroying is linked here. It does a pretty good job of explaining, in my opinion; it just uses a term that we have not been using in this course. When they talk about an instance, they mean an item stored in your table.
Create/Update/Destroy: https://sequelize.org/v5/manual/instances.html
By Bryan Guner on March 13, 2021.
Exported from Medium on August 24, 2021.
Express APIs Part 2:
Overview
REST is a generally agreed-upon set of principles and constraints. They are recommendations, not a standard.
// inside /api/apiRoutes.js <- this can be placed anywhere and called anything
const express = require('express');
// if the other routers are not nested inside /api then the paths would change
const userRoutes = require('./users/userRoutes');
const productRoutes = require('./products/productRoutes');
const clientRoutes = require('./clients/clientRoutes');
const router = express.Router(); // notice the Uppercase R
// this file will only be used when the route begins with "/api"
// so we can remove that from the URLs, so "/api/users" becomes simply "/users"
router.use('/users', userRoutes);
router.use('/products', productRoutes);
router.use('/clients', clientRoutes);
// .. and any other endpoint related to the user's resource
// after the route has been fully configured, we then export it so it can be required where needed
module.exports = router; // standard convention dictates that this is the last line on the file
### Objective 1 — explain the role of a foreign key
Overview
Foreign keys are a type of table field used for creating links between tables. Like primary keys, they are most often integers that identify (rather than store) data. However, whereas a primary key is used to identify rows in a table, foreign keys are used to connect a record in one table to a record in a second table.
Follow Along
Consider the following farms and ranchers tables.
The farm_id in the ranchers table is an example of a foreign key. Each entry in the farm_id (foreign key) column corresponds to an id (primary key) in the farms table. This allows us to track which farm each rancher belongs to while keeping the tables normalized.
If we could only see the ranchers table, we would know that John, Jane, and Jen all work together and that Jim and Jay also work together. However, to know where any of them work, we would need to look at the farms table.
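In memory, the same link can be sketched with two arrays of objects. (The farm names and ids below are hypothetical data mirroring the farms/ranchers example.)

```javascript
// Each rancher's farm_id is a foreign key pointing at a farm's id (primary key).
const farms = [
  { id: 1, name: 'Beaver Creek' },
  { id: 2, name: 'Cedar Ridge' },
];
const ranchers = [
  { id: 1, name: 'John', farm_id: 1 },
  { id: 2, name: 'Jane', farm_id: 1 },
  { id: 3, name: 'Jim', farm_id: 2 },
];

// To find where a rancher works, follow the foreign key into farms.
function farmFor(rancher) {
  return farms.find((f) => f.id === rancher.farm_id);
}

console.log(farmFor(ranchers[0]).name); // Beaver Creek
console.log(farmFor(ranchers[2]).name); // Cedar Ridge
```

Note how the ranchers array alone tells us who works together (shared farm_id values), but resolving the actual workplace requires the farms table — exactly the normalization trade-off described above.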
Challenge
Open SQLTryIT.
How many records in the products table belong to the category “confections”?
Objective 2 — query data from multiple tables
Now that we understand the basics of querying data from a single table, let’s move on to selecting data from multiple tables using JOIN operations.
Overview
We can use a `JOIN` to combine query data from multiple tables using a single `SELECT` statement.
There are different types of joins; some are listed below:
- inner joins.
- outer joins.
- left joins.
- right joins.
- cross joins.
- non-equality joins.
- self joins.
Using joins requires that the two tables of interest contain at least one field with shared information. For example, if a departments table has an id field, an employee table has a department_id field, and the values that exist in the id column of the departments table also live in the department_id field of the employee table, we can use those fields to join both tables like so:
select * from employees
join departments on employees.department_id = departments.id
This query will return the data from both tables for every instance where the `ON` condition is true. If there are employees with no value for `department_id`, or where the value stored in that field does not correspond to an existing `id` in the `departments` table, those records will NOT be returned. In a similar fashion, any records from the `departments` table that don't have an employee associated with them will also be omitted from the results. Basically, if an `id` does not appear as the value of `department_id` for some employee, that row won't be able to join.
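To make the matching behavior concrete, here is a rough in-memory model of an inner join in JavaScript (invented rows; a real query would of course run inside the database):

```javascript
// Hypothetical rows mirroring the employees/departments example.
const departments = [
  { id: 1, name: 'Sales' },
  { id: 2, name: 'IT' },
];
const employees = [
  { id: 1, first_name: 'Ana', department_id: 1 },
  { id: 2, first_name: 'Bo', department_id: 2 },
  { id: 3, first_name: 'Cy', department_id: null }, // no department: dropped by the join
];

// A minimal inner join: keep only pairs where the ON condition holds.
function innerJoin(left, right, on) {
  const rows = [];
  for (const l of left) {
    for (const r of right) {
      if (on(l, r)) rows.push({ ...l, ...r });
    }
  }
  return rows;
}

const joined = innerJoin(employees, departments, (e, d) => e.department_id === d.id);
console.log(joined.length); // 2 -- Cy is omitted, just like in SQL
```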
We can shorten the condition by giving the table names an alias. This is a common practice. Below is the same example using aliases, picking which fields to return and sorting the results:
select d.id, d.name, e.id, e.first_name, e.last_name, e.salary
from employees as e
join departments as d
on e.department_id = d.id
order by d.name, e.last_name
Notice that we can take advantage of white space and indentation to make queries more readable.
There are several ways of writing joins, but the one shown here should work on all database management systems and avoid some pitfalls, so we recommend it.
The syntax for performing a similar join using Knex is as follows:
db('employees as e')
.join('departments as d', 'e.department_id', 'd.id')
.select('d.id', 'd.name', 'e.first_name', 'e.last_name', 'e.salary')
Follow Along
A good explanation of the different types of joins can be found in this article from w3resource.com.
Challenge
Use this online tool to write the following queries:
- list the products, including their category name.
- list the products, including the supplier name.
- list the products, including both the category name and supplier name.
What are SQL joins?
An SQL JOIN clause combines rows from two or more tables, producing a set of rows in a temporary result table.
How do you join two tables in SQL?
A JOIN works on two or more tables if they have at least one common field and a relationship between them.
A JOIN leaves the base tables (structure and data) unchanged.
Join vs. Subquery
- JOINs are usually faster than subqueries; it is rare for the opposite to be true.
- For a JOIN, the RDBMS calculates an execution plan that predicts what data should be loaded and how long it will take to process, which saves time. With a subquery there is no such pre-processing: the database runs all the queries and loads all their data before doing the work.
- A JOIN checks its conditions first and then builds the result set, whereas a subquery builds a separate temporary table internally and then checks conditions against it.
- JOINs require a connection (relationship) between two or more tables, while a subquery is simply a query inside another query: it needs no relationship and works on columns and conditions alone.
SQL JOINS: EQUI JOIN and NON EQUI JOIN
There are two types of SQL JOINs: EQUI JOIN and NON EQUI JOIN.
- SQL EQUI JOIN:
The SQL EQUI JOIN is a simple SQL join that uses the equal sign (=) as the comparison operator in its condition. It has two types: SQL Outer join and SQL Inner join.
- SQL NON EQUI JOIN:
The SQL NON EQUI JOIN is a join that uses a comparison operator other than the equal sign, such as >, <, >=, or <=, in its condition.
SQL EQUI JOIN: INNER JOIN and OUTER JOIN
The SQL EQUI JOIN can be classified into two types: INNER JOIN and OUTER JOIN.
- SQL INNER JOIN
This type of EQUI JOIN returns all rows from tables where the key record of one table is equal to the key records of another table.
- SQL OUTER JOIN
This type of EQUI JOIN returns all rows from one table and only those rows from the secondary table where the join condition is satisfied, i.e. where the joined columns are equal in both tables.
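The difference between the two can be sketched in JavaScript: an inner join drops unmatched rows, while a left outer join keeps every row from the first table and fills the gaps with nulls (hypothetical data again):

```javascript
// A left outer join keeps every row from the left table;
// unmatched rows get null for the right table's fields.
const departments = [{ id: 1, name: 'Sales' }];
const employees = [
  { id: 1, first_name: 'Ana', department_id: 1 },
  { id: 2, first_name: 'Bo', department_id: null },
];

function leftJoin(left, right, on) {
  return left.map(l => {
    const match = right.find(r => on(l, r));
    // keep the left row even when there is no match
    return { ...l, dept_name: match ? match.name : null };
  });
}

const rows = leftJoin(employees, departments, (e, d) => e.department_id === d.id);
console.log(rows[1].dept_name); // null -- Bo survives the join anyway
```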
In order to perform a JOIN query, the required information we need is:
a) the names of the tables;
b) the names of the columns of the two or more tables on which the join condition will be based.
Syntax:
SELECT <column list>
FROM table1
join_type table2
[ON (join_condition)];
Sample table: company
Sample table: foods
To join two tables ‘company’ and ‘foods’, the following SQL statement can be used :
SQL Code:
SELECT company.company_id,company.company_name,
foods.item_id,foods.item_name
FROM company,foods;
Output:
COMPAN COMPANY_NAME ITEM_ID ITEM_NAME
------ ------------------------- -------- ---------------
18 Order All 1 Chex Mix
18 Order All 6 Cheez-It
18 Order All 2 BN Biscuit
18 Order All 3 Mighty Munch
18 Order All 4 Pot Rice
18 Order All 5 Jaffa Cakes
18 Order All 7 Salt n Shake
15 Jack Hill Ltd 1 Chex Mix
15 Jack Hill Ltd 6 Cheez-It
15 Jack Hill Ltd 2 BN Biscuit
15 Jack Hill Ltd 3 Mighty Munch
15 Jack Hill Ltd 4 Pot Rice
15 Jack Hill Ltd 5 Jaffa Cakes
15 Jack Hill Ltd 7 Salt n Shake
16 Akas Foods 1 Chex Mix
16 Akas Foods 6 Cheez-It
16 Akas Foods 2 BN Biscuit
16 Akas Foods 3 Mighty Munch
16 Akas Foods 4 Pot Rice
16 Akas Foods 5 Jaffa Cakes
16 Akas Foods 7 Salt n Shake
.........
.........
.........
Practice SQL Exercises
- SQL Exercises, Practice, Solution
- SQL Retrieve data from tables [33 Exercises]
- SQL Boolean and Relational operators [12 Exercises]
- SQL Wildcard and Special operators [22 Exercises]
- SQL Aggregate Functions [25 Exercises]
- SQL Formatting query output [10 Exercises]
- SQL Querying on Multiple Tables [7 Exercises]
- FILTERING and SORTING on HR Database [38 Exercises]
- SQL JOINS
- SQL JOINS [29 Exercises]
- SQL JOINS on HR Database [27 Exercises]
- SQL SUBQUERIES
- SQL SUBQUERIES [39 Exercises]
- SQL SUBQUERIES on HR Database [55 Exercises]
- SQL Union[9 Exercises]
- SQL View[16 Exercises]
- SQL User Account Management [16 Exercises]
- Movie Database
- BASIC queries on movie Database [10 Exercises]
- SUBQUERIES on movie Database [16 Exercises]
- JOINS on movie Database [24 Exercises]
- Soccer Database
- Introduction
- BASIC queries on soccer Database [29 Exercises]
- SUBQUERIES on soccer Database [33 Exercises]
- JOINS queries on soccer Database [61 Exercises]
- Hospital Database
- Introduction
- BASIC, SUBQUERIES, and JOINS [39 Exercises]
- Employee Database
- BASIC queries on employee Database [115 Exercises]
- SUBQUERIES on employee Database [77 Exercises]
- More to come!
### Objective 3 — write database access methods
Overview
While we could write database code directly into our endpoints, best practice dictates that all database logic live in separate, modular methods. The files containing these database access helpers are often called models.
Follow Along
To handle CRUD operations for a single resource, we would want to create a model (or database access file) containing the following methods:
function find() {
}
function findById(id) {
}
function add(user) {
}
function update(changes, id) {
}
function remove(id) {
}
Each of these functions would use Knex logic to perform the necessary database operation.
function find() {
return db('users');
}
For each method, we can choose what value to return. For example, we may prefer `findById()` to return a single `user` object rather than an array.
function findById(id) {
// first() returns the first entry in the db matching the query
return db('users').where({ id }).first();
}
We can also use existing methods like `findById()` to help `add()` return the new user (instead of just the id).
function add(user) {
// return the promise chain so callers receive the new user
return db('users').insert(user)
.then(ids => {
return findById(ids[0]);
});
}
Once all methods are written as desired, we can export them like so:
module.exports = {
find,
findById,
add,
update,
remove, // note: `delete` is a reserved word in JavaScript, so we name this helper `remove`
}
…and use the helpers in our endpoints
const User = require('./user-model.js');
router.get('/', (req, res) => {
User.find()
.then(users => {
res.json(users);
})
.catch( err => {});
});
There should be no `knex` code in the endpoints themselves.
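The shape of such a model can be sketched without a database at all. Below, an in-memory array stands in for the `users` table so the pattern is easy to see; the real methods would contain the Knex calls shown above:

```javascript
// In-memory stand-in for the users table; a real model would use db('users')...
let users = [{ id: 1, name: 'bill' }];
let nextId = 2;

function find() {
  // resolves to all users, like db('users')
  return Promise.resolve(users);
}

function findById(id) {
  // resolves to a single user object (or undefined), like .where({ id }).first()
  return Promise.resolve(users.find(u => u.id === id));
}

function add(user) {
  const record = { id: nextId++, ...user };
  users.push(record);
  // mirror the Knex version: return the full new user, not just the id
  return findById(record.id);
}

module.exports = { find, findById, add };
```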
### A database is a collection of data organized for easy retrieval and manipulation.
We’re concerned only with digital databases, those that run on computers or other electronic devices. Digital databases have been around since the 1960s. Relational databases, those which store “related” data, are the oldest and most common type of database in use today.
Data Persistence
A database is often necessary because our application or code requires data persistence. The term refers to data that outlives the process that created it. In less technical terms, the information will be safely stored and remain untouched unless intentionally modified.
A familiar example of non-persistent data would be JavaScript objects and arrays, which reset each time the code runs.
Relational Databases
In relational databases, the data is stored in tabular format grouped into rows and columns (similar to spreadsheets). A collection of rows is called a table. Each row represents a single record in the table and is made up of one or more columns.
These kinds of databases are relational because a relation is a mathematical idea equivalent to a table. So relational databases are databases that store their data in tables.
Tables
Below are some basic facts about tables:
> Tables organize data in rows and columns.
> Each row in a table represents one distinct record.
> Each column represents a field or attribute that is common to all the records.
> Fields should have a descriptive name and a data type appropriate for the attribute it represents.
> Tables usually have more rows than columns.
> Tables have primary keys that uniquely identify each row.
> Foreign keys represent the relationships with other tables.
SQL is a standard language, which means that it almost certainly will be supported, no matter how your database is managed. That said, be aware that the SQL language can vary depending on database management tools. This lesson focuses on a set of core commands that never change. Learning the standard commands is an excellent introduction since the knowledge transfers between database products.
The syntax for SQL is English-like and requires fewer symbols than programming languages like C, Java, and JavaScript.
It is declarative and concise, which means there is a lot less to learn to use it effectively.
When learning SQL, it is helpful to understand that each command is designed for a different purpose. If we classify the commands by purpose, we’ll end up with the following sub-categories of SQL:
- Data Definition Language (DDL): used to modify database objects. Some examples are: `CREATE TABLE`, `ALTER TABLE`, and `DROP TABLE`.
- Data Manipulation Language (DML): used to manipulate the data stored in the database. Some examples are: `INSERT`, `UPDATE`, and `DELETE`.
- Data Query Language (DQL): used to ask questions about the data stored in the database. The most commonly used SQL command is `SELECT`, and it falls in this category.
- Data Control Language (DCL): used to manage database security and users' access to data. These commands fall into the realm of Database Administrators. Some examples are `GRANT` and `REVOKE`.
- Transaction Control Commands: used for managing groups of statements that must execute as a unit or not execute at all. Examples are `COMMIT` and `ROLLBACK`.
As a developer, you’ll need to get familiar with DDL and become proficient using DML and DQL. This lesson will cover only DML and DQL commands.
The four SQL operations covered in this section will allow a user to query, insert, update, and delete records in a database table.
Query
A query is a SQL statement used to retrieve data from a database. The command used to write queries is `SELECT`, and it is one of the most commonly used SQL commands.
The basic syntax for a `SELECT` statement is this:
select <selection> from <table name>;
To see all the fields on a table, we can use a `*` as the selection.
select * from employees;
The preceding statement would show all the records and all the columns for each record in the `employees` table.
To pick the fields we want to see, we use a comma-separated list:
select first_name, last_name, salary from employees;
The return of that statement would hold all records from the listed fields.
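If it helps, picking fields works much like projecting each row object down to a subset in JavaScript (hypothetical rows):

```javascript
// Rough analogy for: select first_name, last_name, salary from employees
const employees = [
  { id: 1, first_name: 'Ana', last_name: 'Lee', salary: 52000, ssn: 'not selected' },
];

// keep only the requested fields from every row
const result = employees.map(({ first_name, last_name, salary }) =>
  ({ first_name, last_name, salary }));

console.log(result[0]); // { first_name: 'Ana', last_name: 'Lee', salary: 52000 }
```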
We can extend the `SELECT` command's capabilities using clauses for things like filtering, sorting, pagination, and more.
It is possible to query multiple tables in a single query. But, in this section, we only perform queries on a single table. We will cover performing queries on multiple tables in another section.
Insert
To insert new data into a table, we'll use the `INSERT` command. The basic syntax for an `INSERT` statement is this:
insert into <table name> (<field list>) values (<value list>);
Using this formula, we can specify which values will be inserted into which fields like so:
insert into Customers (Country, CustomerName, ContactName, Address, City, PostalCode)
values ('USA', 'WebDev School', 'Austen Allred', '1 WebDev Court', 'Provo', '84601');
Modify
Modifying a database consists of updating and removing records. For these operations, we'll use the `UPDATE` and `DELETE` commands, respectively.
The basic syntax for an `UPDATE` statement is:
update <table name> set <field> = <value> where <condition>;
The basic syntax for a `DELETE` statement is:
delete from <table name> where <condition>;
### Filtering results using WHERE clause
When querying a database, the default result will be every entry in the given table. However, often, we are looking for a specific record or a set of records that meets certain criteria.
A `WHERE` clause can help in both cases.
Here’s an example where we might only want to find customers living in Berlin.
select City, CustomerName, ContactName
from Customers
where City = 'Berlin'
We can also chain together `WHERE` conditions using `OR` and `AND` to limit our results further.
The following query includes only records that match both criteria.
select City, CustomerName, ContactName
from Customers
where Country = 'France' and City = 'Paris'
And this query includes records that match either criterion.
select City, CustomerName, ContactName
from Customers
where Country = 'France' or City = 'Paris'
These operators can be combined and grouped with parentheses to add complex selection logic. They behave similarly to what you’re used to in programming languages.
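The analogy to a programming language is direct: the two queries above behave like `&&` and `||` inside a filter (invented rows):

```javascript
// Invented stand-ins for the Customers table.
const customers = [
  { CustomerName: 'A', Country: 'France', City: 'Paris' },
  { CustomerName: 'B', Country: 'France', City: 'Lyon' },
  { CustomerName: 'C', Country: 'Spain', City: 'Paris' },
];

// where Country = 'France' and City = 'Paris'
const both = customers.filter(c => c.Country === 'France' && c.City === 'Paris');

// where Country = 'France' or City = 'Paris'
const either = customers.filter(c => c.Country === 'France' || c.City === 'Paris');

console.log(both.length, either.length); // 1 3
```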
You can read more about SQLite operators from w3resource.
To select a single record, we can use a `WHERE` clause with a uniquely identifying field, like an id:
select * from Customers
where CustomerId=3;
Other comparison operators also work in `WHERE` conditions, such as `>`, `<`, `<=`, and `>=`.
select * from employees where salary >= 50000
Ordering results using the ORDER BY clause
Query results are typically shown in the same order the data was inserted. To control how the data is sorted, we can use the `ORDER BY` clause. Let's see an example.
-- sorts the results first by salary in descending order, then by the last name in ascending order
select * from employees order by salary desc, last_name;
We can pass a list of field names to `order by` and optionally choose `asc` or `desc` for the sort direction. The default is `asc`, so it doesn't need to be specified.
Some SQL engines also support using field abbreviations when sorting.
select name, salary, department from employees order by 3, 2 desc;
In this case, the results are sorted by department in ascending order first and then by salary in descending order. The numbers refer to the fields' positions in the selection portion of the query, so `1` would be name, `2` would be salary, and so on.
Note that the `WHERE` clause should come after the `FROM` clause. The `ORDER BY` clause always goes last.
select * from employees where salary > 50000 order by last_name;
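The multi-key sort in the `order by salary desc, last_name` example corresponds to a comparator like this in JavaScript (invented rows):

```javascript
// Invented rows standing in for the employees table.
const employees = [
  { last_name: 'Young', salary: 50000 },
  { last_name: 'Adams', salary: 50000 },
  { last_name: 'Price', salary: 90000 },
];

// order by salary desc, last_name
employees.sort((a, b) =>
  (b.salary - a.salary) ||                 // salary, descending
  a.last_name.localeCompare(b.last_name)   // then last name, ascending (the tiebreaker)
);

console.log(employees.map(e => e.last_name)); // [ 'Price', 'Adams', 'Young' ]
```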
Limiting results using the LIMIT clause
When we wish to see only a limited number of records, we can use a `LIMIT` clause.
The following returns the first ten records in the products table:
select * from products
limit 10
`LIMIT` clauses are often used in conjunction with `ORDER BY`. The following shows us the five cheapest products:
select * from products
order by price
limit 5
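In JavaScript terms, `LIMIT` after `ORDER BY` is sorting followed by slicing, so the "cheapest" query amounts to an ascending sort plus a `slice` (invented rows):

```javascript
// Invented rows standing in for the products table.
const products = [
  { name: 'A', price: 9 }, { name: 'B', price: 3 },
  { name: 'C', price: 7 }, { name: 'D', price: 1 },
];

// the two cheapest products: sort ascending by price, then take the first two
const cheapest = [...products]
  .sort((a, b) => a.price - b.price) // order by price (ascending)
  .slice(0, 2);                      // limit 2

console.log(cheapest.map(p => p.name)); // [ 'D', 'B' ]
```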
Inserting data using INSERT
An insert statement adds a new record to the database. All non-null fields must be listed in the same order as their values. Some fields, like ids and timestamps, may be auto-generated and do not need to be included in an `INSERT` statement.
-- we can add fields in any order; the values need to be in the same ordinal position
-- the id will be assigned automatically
insert into Customers (Country, CustomerName, ContactName, Address, City, PostalCode)
values ('USA', 'WebDev School', 'Austen Allred', '1 WebDev Court', 'Provo', '84601');
The values in an insert statement must not violate any restrictions and constraints that the database has in place, such as expected datatypes. We will learn more about constraints and schema design in a later section.
Modifying records using UPDATE
When modifying a record, we identify a single record or a set of records to update using a `WHERE` clause. Then we can set the new value(s) in place.
update Customers
set City = 'Silicon Valley', Country = 'USA'
where CustomerName = 'WebDev School'
Technically the `WHERE` clause is not required, but leaving it off would result in every record in the table receiving the update.
Removing records using DELETE
When removing a record or set of records, we need only identify which record(s) to remove using a `WHERE` clause:
delete from Customers
where CustomerName = 'WebDev School';
Once again, the `WHERE` clause is not required, but leaving it off would remove every record in the table, so it's essential.
Raw SQL is a critical baseline skill. However, Node developers generally use an Object Relational Mapper (ORM) or query builder to write database commands in a backend codebase. Both ORMs and query builders are JavaScript libraries that allow us to interface with the database using a JavaScript version of the SQL language.
For example, instead of a raw SQL `SELECT`:
SELECT * FROM users;
We could use a query builder to write the same logic in JavaScript:
db.select('*').from('users');
Query builders are lightweight and easy to get off the ground, whereas ORMs use an object-oriented model and provide more heavy lifting within their rigid structure.
We will use a query builder called knex.js (Links to an external site.).
To use Knex in a repository, we’ll need to add two libraries:
npm install knex sqlite3
`knex` is our query builder library, and `sqlite3` allows us to interface with a `sqlite` database. We'll learn more about `sqlite` and other database management systems in the following module. For now, know that you need both libraries.
Next, we use Knex to set up a config file:
const knex = require('knex');
const config = {
client: 'sqlite3',
connection: {
filename: './data/posts.db3',
},
useNullAsDefault: true,
};
module.exports = knex(config);
To use the query builder elsewhere in our code, we need to call `knex` and pass in a `config` object. We'll be discussing Knex configuration more in a future module. Still, we only need the `client`, `connection`, and `useNullAsDefault` keys as shown above. The `filename` should point towards the pre-existing database file, which can be recognized by the `.db3` extension.
GOTCHA: The file path to the database should be with respect to the root of the repo, not the configuration file itself.
Once Knex is configured, we can import the above config file anywhere in our codebase to access the database.
const db = require('../data/db-config.js');
The `db` object provides methods that allow us to begin building queries.
SELECT using Knex
In Knex, the equivalent of `SELECT * FROM users` is:
db.select('*').from('users');
There’s a simpler way to write the same command:
db('users');
Using this, we could write a `GET` endpoint.
router.get('/api/users', (req, res) => {
db('users')
.then(users => {
res.json(users);
})
.catch (err => {
res.status(500).json({ message: 'Failed to get users' });
});
});
NOTE: All Knex queries return promises.
Knex also allows for a where clause. In Knex, we could write `SELECT * FROM users WHERE id=1` as:
db('users').where({ id: 1 });
This method will resolve to an array containing a single entry like so: `[{ id: 1, name: 'bill' }]`.
Using this, we might add a `GET` endpoint for a specific user:
server.get('/api/users/:id', (req, res) => {
const { id } = req.params;
db('users').where({ id })
.then(users => {
// we must check the length to find out if our user exists
if (users.length) {
res.json(users);
} else {
res.status(404).json({ message: 'Could not find user with given id.' });
}
})
.catch(err => {
res.status(500).json({ message: 'Failed to get user' });
});
});
INSERT using Knex
In Knex, the equivalent of `INSERT INTO users (name, age) VALUES ('Eva', 32)` is:
db('users').insert({ name: 'Eva', age: 32 });
The insert method in Knex will resolve to an array containing the newly created id for that user like so: `[3]`.
UPDATE using Knex
In Knex, the equivalent of `UPDATE users SET name='Ava', age=33 WHERE id=3;` is:
db('users').where({ id: 3 })
.update({name: 'Ava', age: 33 });
Note that the `where` method comes before `update`, unlike in SQL.
Update will resolve to a count of rows updated.
DELETE using Knex
In Knex, the equivalent of `DELETE FROM users WHERE age=33;` is:
db('users').where({ age: 33}).del();
Once again, the `where` must come before the `del`. This method will resolve to a count of records removed.
Here’s a small project you can practice with.
SQLite Studio is an application that allows us to create, open, view, and modify SQLite databases. To fully understand what SQLite Studio is and how it works, we must also understand the concept of a Database Management System (DBMS).
What is a DBMS?
To manage digital databases we use specialized software called DataBase Management Systems (DBMS). These systems typically run on servers and are managed by DataBase Administrators (DBAs).
In less technical terms, we need a type of software that will allow us to create, access, and generally manage our databases. In the world of relational databases, we specifically use Relational Database Management Systems (RDBMSs). Some examples are Postgres, SQLite, MySQL, and Oracle.
Choosing a DBMS determines everything from how you set up your database, to where and how the data is stored, to what SQL commands you can use. Most systems share the core of the SQL language that you’ve already learned.
In other words, you can expect `SELECT`, `UPDATE`, `INSERT`, `WHERE`, and the like to be the same across all DBMSs, but the subtleties of the language may vary.
What is SQLite?
SQLite is a DBMS; as the name suggests, it is a more lightweight system and thus easier to get set up than some others.
SQLite allows us to store databases as single files. SQLite projects have a `.db3` extension; that file is the database.
SQLite is not a kind of database (the way relational, graph, or document are kinds of databases) but rather a database management system.
Opening an existing database in SQLite Studio
One useful visual interface we might use with a SQLite database is called SQLite Studio. Install SQLite Studio here.
Once installed, we can use SQLite Studio to open any `.db3` file from a previous lesson. We may view the tables, view the data, and even make changes to the database.
For a more detailed look at SQLite Studio, follow along in the video above.
A database schema is the shape of our database. It defines what tables we'll have, which columns should exist within the tables, and any restrictions on each column.
A well-designed database schema keeps the data well organized and can help ensure high-quality data.
Note that while schema design is usually left to Database Administrators (DBAs), understanding schema helps when designing APIs and database logic. And in a smaller team, this step may fall on the developer.
For a look at schema design in SQLite Studio, follow along in the video above.
When designing a single table, we need to ask three things:
- What fields (or columns) are present?
- What type of data do we expect for each field?
- Are there other restrictions needed for each column?
Looking at the following schema diagram for an `accounts` table, we can see the answer to each of those questions:
Table Fields
Choosing which fields to include in a table is relatively straightforward: what information needs to be tracked regarding this resource? In the real world, this is determined by the intended use of the product or app.
However, there is one requirement every table should satisfy: a primary key. A primary key is a way to uniquely identify each entry in the table. It is most often represented as an auto-incrementing integer called `id` or `[tablename]Id`.
Datatypes
Each field must also have a specified datatype. The datatypes available depend on our DBMS. Some supported datatypes in SQLite include:
- Null: Missing or unknown information.
- Integer: Whole numbers.
- Real: Any number, including decimals.
- Text: Character data.
- Blob: a large binary object that can be used to store miscellaneous data.
Any data inserted into the table must match the datatypes determined in schema design.
Constraints
Beyond datatypes, we may add additional constraints on each field. Some examples include:
- Not Null: The field cannot be left empty
- Unique: No two records can have the same value in this field
- Primary key: Indicates this field is the primary key. Both the not null and unique constraints will be enforced.
- Default: Sets a default value if none is provided.
As with data types, any data that does not satisfy the schema constraints will be rejected from the database.
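As a rough mental model, the not-null, unique, and default constraints behave like the checks below, except that a real DBMS enforces them itself and rejects the offending statement (this sketch just throws):

```javascript
// In-memory sketch of constraint enforcement on a hypothetical accounts table.
const rows = [];

function insert(row) {
  // not null: the field cannot be left empty
  if (row.name == null) throw new Error('NOT NULL constraint failed: name');
  // unique: no two records may share this value
  if (rows.some(r => r.name === row.name)) throw new Error('UNIQUE constraint failed: name');
  // default: fill in a value when none is provided
  rows.push({ budget: 0, ...row });
}

insert({ name: 'checking' });

let rejected = false;
try {
  insert({ name: 'checking' }); // violates the unique constraint
} catch (e) {
  rejected = true;
}
console.log(rejected, rows.length); // true 1
```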
Another critical component of schema design is understanding how different tables relate to each other. This will be covered in a later lesson.
Knex provides a schema builder, which allows us to write code to design our database schema. However, beyond thinking about columns and constraints, we must also consider updates.
When a schema needs to be updated, a developer must feel confident that the changes go into effect everywhere. This means schema updates on the developer’s local machine, on any testing or staging versions, on the production database, and then on any other developer’s local machines. This is where migrations come into play.
A database migration describes changes made to the structure of a database. Migrations include things like adding new objects, adding new tables, and modifying existing objects or tables.
To use migrations (and to make Knex setup easier), we need the Knex CLI. Install Knex globally with `npm install -g knex`.
This allows you to use Knex commands within any repo that has `knex` as a local dependency. If you have any issues with this global install, you can use the `npx knex` command instead.
Initializing Knex
To start, add the `knex` and `sqlite3` libraries to your repository.
npm install knex sqlite3
We’ve seen how to manually create a config object to get started with Knex, but the best practice is to use the following command:
knex init
Or, if Knex isn’t globally installed:
npx knex init
This command will generate a file in your root folder called `knexfile.js`. It will be auto-populated with three config objects, based on different environments. We can delete all except the development object.
module.exports = {
development: {
client: 'sqlite3',
connection: {
filename: './dev.sqlite3'
}
}
};
We’ll need to update the location (or desired location) of the database, as well as add the `useNullAsDefault` option. The latter prevents crashes when working with `sqlite3`.
module.exports = {
development: {
// our DBMS driver
client: 'sqlite3',
// the location of our db
connection: {
filename: './data/database_file.db3',
},
// necessary when using sqlite3
useNullAsDefault: true
}
};
Now, wherever we configure our database, we may use the following syntax instead of hardcoding in a config object.
const knex = require('knex');
const config = require('../knexfile.js');
// we must select the development object from our knexfile
const db = knex(config.development);
// export for use in codebase
module.exports = db;
Knex Migrations
Once our `knexfile` is set up, we can begin creating migrations. Though it's not required, we are going to add an additional option to the config object to specify a directory for the migration files.
development: {
client: 'sqlite3',
connection: {
filename: './data/produce.db3',
},
useNullAsDefault: true,
// generates migration files in a data/migrations/ folder
migrations: {
directory: './data/migrations'
}
}
We can generate a new migration with the following command:
knex migrate:make [migration-name]
If we needed to create an accounts table, we might run:
knex migrate:make create-accounts
Note that inside `data/migrations/` a new file has appeared. Migrations have a timestamp in their filenames automatically. Whether you like this or not, do not edit migration names.
The migration file should have both an `up` and a `down` function. Within the `up` function, we write the intended database changes. Within the `down` function, we write the code to undo the `up` function. This allows us to undo any changes made to the schema if necessary.
exports.up = function(knex, Promise) {
// don't forget the return statement
return knex.schema.createTable('accounts', tbl => {
// creates a primary key called id
tbl.increments();
// creates a text field called name which is both required and unique
tbl.text('name', 128).unique().notNullable();
// creates a numeric field called budget which is required
tbl.decimal('budget').notNullable();
});
};
exports.down = function(knex, Promise) {
// drops the entire table
return knex.schema.dropTableIfExists('accounts');
};
References for these methods are found in the schema builder section of the Knex docs.
At this point, the table is not yet created. To run this (and any other) migrations, use the command:
knex migrate:latest
Note if the database does not exist, this command will auto-generate one. We can use SQLite Studio to confirm that the accounts table has been created.
Changes and Rollbacks
If, later down the road, we realize we need to update our schema, we shouldn't edit the migration file. Instead, we will want to create a new migration with the command:
knex migrate:make accounts-schema-update
Once we’ve written our updates into this file, we save it and apply the changes with:
knex migrate:latest
If we migrate our database and then quickly realize something isn't right, we can edit the migration file. However, first we need to roll back (or undo) our last migration with:
knex migrate:rollback
Finally, we are free to rerun that file with `knex migrate:latest`.
NOTE: A rollback should not be used to edit an old migration file once that file has been accepted into a main branch. However, an entire team may use a rollback to return to a previous version of a database.
Overview
Knex provides a schema builder, which allows us to write code to design our database schema. However, beyond thinking about columns and constraints, we must also consider updates.
When a schema needs to be updated, a developer must feel confident that the changes go into effect everywhere. This means schema updates on the developer’s local machine, on any testing or staging versions, on the production database, and then on any other developer’s local machines. This is where migrations come into play.
A database migration
describes changes made to the structure of a database. Migrations include things like adding new objects, adding new tables, and modifying existing objects or tables.
To use migrations (and to make Knex setup easier), we need to use the Knex CLI. Install knex globally with npm install -g knex
.
This allows you to use Knex commands within any repo that has knex
as a local dependency. If you have any issues with this global install, you can use the npx knex
command instead.
Initializing Knex
To start, add the knex
and sqlite3
libraries to your repository.
npm install knex sqlite3
We’ve seen how to manually create a config object to get started with Knex, but the best practice is to use the following command:
knex init
Or, if Knex isn’t globally installed:
npx knex init
This command will generate a file in your root folder called knexfile.js
. It will be auto-populated with three config objects, based on different environments. We can delete all except the development object.
module.exports = {
development: {
client: 'sqlite3',
connection: {
filename: './dev.sqlite3'
}
}
};
We’ll need to update the location (or desired location) of the database as well as add the useNullAsDefault
option. The latter option prevents crashes when working with sqlite3
.
module.exports = {
development: {
// our DBMS driver
client: 'sqlite3',
// the location of our db
connection: {
filename: './data/database_file.db3',
},
// necessary when using sqlite3
useNullAsDefault: true
}
};
Now, wherever we configure our database, we may use the following syntax instead of hardcoding in a config object.
const knex = require('knex');
const config = require('../knexfile.js');
// we must select the development object from our knexfile
const db = knex(config.development);
// export for use in codebase
module.exports = db;
Knex Migrations
Once our knexfile
is set up, we can begin creating migrations. Though it's not required, we are going to add an additional option to the config object to specify a directory for the migration files.
development: {
client: 'sqlite3',
connection: {
filename: './data/produce.db3',
},
useNullAsDefault: true,
// generates migration files in a data/migrations/ folder
migrations: {
directory: './data/migrations'
}
}
We can generate a new migration with the following command:
knex migrate:make [migration-name]
If we needed to create an accounts table, we might run:
knex migrate:make create-accounts
Note that inside data/migrations/
a new file has appeared. Migrations automatically have a timestamp in their filenames. Whether you like this or not, do not edit migration names.
The migration file should have both an up
and a down
function. Within the up
function, we write the intended database changes. Within the down
function, we write the code to undo the up
functions. This allows us to undo any changes made to the schema if necessary.
exports.up = function(knex, Promise) {
// don't forget the return statement
return knex.schema.createTable('accounts', tbl => {
// creates a primary key called id
tbl.increments();
// creates a text field called name which is both required and unique
tbl.text('name', 128).unique().notNullable();
// creates a numeric field called budget which is required
tbl.decimal('budget').notNullable();
});
};
exports.down = function(knex, Promise) {
// drops the entire table
return knex.schema.dropTableIfExists('accounts');
};
References for these methods are found in the schema builder section of the Knex docs.
At this point, the table is not yet created. To run this (and any other) migrations, use the command:
knex migrate:latest
Note that if the database does not exist, this command will auto-generate one. We can use SQLite Studio to confirm that the accounts table has been created.
Changes and Rollbacks
If, later down the road, we realize we need to update our schema, we shouldn’t edit the original migration file. Instead, we create a new migration with the command:
knex migrate:make accounts-schema-update
Once we’ve written our updates into this file, we save it and run:
knex migrate:latest
If we migrate our database and then quickly realize something isn’t right, we can edit the migration file. However, first we need to rollback (or undo) our last migration with:
knex migrate:rollback
Finally, we are free to rerun that file with knex migrate:latest.
NOTE: A rollback should not be used to edit an old migration file once that file has been accepted into a main branch. However, an entire team may use a rollback to return to a previous version of a database.
Overview
Often we want to pre-populate our database with sample data for testing. Seeds allow us to add and reset sample data easily.
Follow Along
The Knex command-line tool offers a way to seed our database; in other words, pre-populate our tables.
Similarly to migrations, we want to customize where our seed files are generated using our knexfile:
development: {
client: 'sqlite3',
connection: {
filename: './data/produce.db3',
},
useNullAsDefault: true,
// generates migration files in a data/migrations/ folder
migrations: {
directory: './data/migrations'
},
// generates seed files in a data/seeds/ folder
seeds: {
directory: './data/seeds'
}
}
To create a seed run: knex seed:make 001-seedName
Numbering is a good idea because Knex doesn’t attach a timestamp to the name the way migrate does. By adding numbers to the file names, we can control the order in which they run.
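Since seed files run in alphabetical filename order, zero-padding the numeric prefix matters. A quick plain-JavaScript sketch (with hypothetical filenames) shows why:

```javascript
// Hypothetical seed filenames; Knex runs seed files in
// alphabetical order, so zero-padded prefixes sort as intended.
const padded = ['001-accounts.js', '002-users.js', '010-orders.js'];
const unpadded = ['1-accounts.js', '2-users.js', '10-orders.js'];

// Alphabetical sort keeps zero-padded names in numeric order...
console.log([...padded].sort());
// => ['001-accounts.js', '002-users.js', '010-orders.js']

// ...but without padding, '10-' sorts before '2-'.
console.log([...unpadded].sort());
// => ['1-accounts.js', '10-orders.js', '2-users.js']
```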
We want to create seeds for our accounts table:
knex seed:make 001-accounts
A file will appear in the designated seed folder.
exports.seed = function(knex, Promise) {
// we want to remove all data before seeding
// truncate will reset the primary key each time
return knex('accounts').truncate()
.then(function () {
// add data into insert
return knex('accounts').insert([
{ name: 'Stephenson', budget: 10000 },
{ name: 'Gordon & Gale', budget: 40400 },
]);
});
};
Run the seed files by typing:
knex seed:run
You can now use SQLite Studio to confirm that the accounts table has two entries.
Foreign keys are a type of table field used for creating links between tables. Like primary keys, they are most often integers that identify (rather than store) data. However, whereas a primary key is used to identify rows within a table, foreign keys are used to connect a record in one table to a record in a second table.
Consider the following farms
and ranchers
tables.
The farm_id
in the ranchers
table is an example of a foreign key
. Each entry in the farm_id
(foreign key) column corresponds to an id
(primary key) in the farms
table. This allows us to track which farm each rancher belongs to while keeping the tables normalized.
If we could only see the ranchers
table, we would know that John, Jane, and Jen all work together and that Jim and Jay also work together. However, to know where any of them work, we would need to look at the farms
table.
Now that we understand the basics of querying data from a single table, let’s move on to selecting data from multiple tables using JOIN operations.
Overview
We can use a JOIN
to combine query data from multiple tables using a single SELECT
statement.
There are different types of joins; some are listed below:
- inner joins.
- outer joins.
- left joins.
- right joins.
- cross joins.
- non-equality joins.
- self joins.
Using joins
requires that the two tables of interest contain at least one field with shared information. For example, if a departments table has an id field, and an employee table has a department_id field, and the values that exist in the id column of the departments table live in the department_id field of the employee table, we can use those fields to join both tables like so:
select * from employees
join departments on employees.department_id = departments.id
This query will return the data from both tables for every instance where the ON
condition is true. If there are employees with no value for department_id, or where the value stored in that field does not correspond to an existing id in the departments table, those records will NOT be returned. In a similar fashion, any records from the departments table that don't have an employee associated with them will also be omitted from the results. Basically, if an id does not show up as the value of department_id for some employee, those rows won't be joined.
We can shorten the condition by giving the table names an alias. This is a common practice. Below is the same example using aliases, picking which fields to return and sorting the results:
select d.id, d.name, e.id, e.first_name, e.last_name, e.salary
from employees as e
join departments as d
on e.department_id = d.id
order by d.name, e.last_name
Notice that we can take advantage of white space and indentation to make queries more readable.
There are several ways of writing joins, but the one shown here should work on all database management systems and avoid some pitfalls, so we recommend it.
The syntax for performing a similar join using Knex is as follows:
db('employees as e')
.join('departments as d', 'e.department_id', 'd.id')
.select('d.id', 'd.name', 'e.first_name', 'e.last_name', 'e.salary')
Follow Along
A good explanation of the different types of joins can be found in this article from w3resource.com.
What are SQL Joins?
An SQL JOIN clause combines rows from two or more tables. It creates a set of rows in a temporary table.
How to Join two tables in SQL?
A JOIN works on two or more tables if they have at least one common field and a relationship between them.
JOIN keeps the base tables (structure and data) unchanged.
Join vs. Subquery
- JOINs are usually faster than subqueries; it is very rare for the opposite to be true.
- With a JOIN, the RDBMS calculates an execution plan that predicts which data should be loaded and how long processing will take, saving time. With a subquery there is no such pre-processing calculation: all of the queries run and load their data before the processing happens.
- A JOIN checks its conditions first and then builds the result table for display, whereas a subquery builds a separate temporary table internally and then checks the condition.
- Using JOINs requires a connection between two or more tables, each related to the others, while a subquery is a query inside another query; it needs no relationship and works on columns and conditions.
SQL JOINS: EQUI JOIN and NON EQUI JOIN
There are two types of SQL JOINS: EQUI JOIN and NON EQUI JOIN.
- SQL EQUI JOIN :
The SQL EQUI JOIN is a simple SQL join that uses the equals sign (=) as the comparison operator in its condition. It has two types: SQL Outer join and SQL Inner join.
- SQL NON EQUI JOIN :
The SQL NON EQUI JOIN is a join that uses a comparison operator other than the equals sign, such as >, <, >=, or <=, in its condition.
SQL EQUI JOIN : INNER JOIN and OUTER JOIN
The SQL EQUI JOIN can be classified into two types — INNER JOIN and OUTER JOIN
- SQL INNER JOIN
This type of EQUI JOIN returns all rows from tables where the key record of one table is equal to the key records of another table.
- SQL OUTER JOIN
This type of EQUI JOIN returns all rows from one table and only those rows from the secondary table where the join condition is satisfied, i.e., where the columns are equal in both tables.
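To see the difference from an inner join, here is a small plain-JavaScript simulation of a left outer join, where an unmatched row from the first table is kept and padded with null (the sample rows are hypothetical):

```javascript
// Hypothetical rows: 'sip-n-Bite' has no matching food item.
const companies = [
  { company_id: 16, company_name: 'Akas Foods' },
  { company_id: 17, company_name: 'sip-n-Bite' },
];
const foods = [{ item_id: 1, item_name: 'Chex Mix', company_id: 16 }];

// A left outer join keeps every row from the left table,
// filling in null when there is no match on the right.
const leftJoined = companies.flatMap(c => {
  const matches = foods.filter(f => f.company_id === c.company_id);
  if (matches.length === 0) return [{ company: c.company_name, item: null }];
  return matches.map(f => ({ company: c.company_name, item: f.item_name }));
});

console.log(leftJoined);
// => [ { company: 'Akas Foods', item: 'Chex Mix' },
//      { company: 'sip-n-Bite', item: null } ]
```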
In order to perform a JOIN query, the information we need is:
a) the names of the tables
b) the names of the columns of the two or more tables on which the join condition will be based
Syntax:
FROM table1
join_type table2
[ON (join_condition)]
Sample table: company
Sample table: foods
To join two tables ‘company’ and ‘foods’, the following SQL statement can be used :
SQL Code:
SELECT company.company_id,company.company_name,
foods.item_id,foods.item_name
FROM company,foods;
Output:
COMPAN COMPANY_NAME ITEM_ID ITEM_NAME
------ ------------------------- -------- ---------------
18 Order All 1 Chex Mix
18 Order All 6 Cheez-It
18 Order All 2 BN Biscuit
18 Order All 3 Mighty Munch
18 Order All 4 Pot Rice
18 Order All 5 Jaffa Cakes
18 Order All 7 Salt n Shake
15 Jack Hill Ltd 1 Chex Mix
15 Jack Hill Ltd 6 Cheez-It
15 Jack Hill Ltd 2 BN Biscuit
15 Jack Hill Ltd 3 Mighty Munch
15 Jack Hill Ltd 4 Pot Rice
15 Jack Hill Ltd 5 Jaffa Cakes
15 Jack Hill Ltd 7 Salt n Shake
16 Akas Foods 1 Chex Mix
16 Akas Foods 6 Cheez-It
16 Akas Foods 2 BN Biscuit
16 Akas Foods 3 Mighty Munch
16 Akas Foods 4 Pot Rice
16 Akas Foods 5 Jaffa Cakes
16 Akas Foods 7 Salt n Shake
.........
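Because the query above lists both tables with no join condition, it produces a Cartesian product: every company paired with every food item. A small plain-JavaScript sketch, using just the three companies and seven foods visible in the output above:

```javascript
// The three companies and seven food items shown in the output above.
const companyNames = ['Order All', 'Jack Hill Ltd', 'Akas Foods'];
const foodItems = [
  'Chex Mix', 'Cheez-It', 'BN Biscuit', 'Mighty Munch',
  'Pot Rice', 'Jaffa Cakes', 'Salt n Shake',
];

// Every combination appears once, mirroring FROM company, foods;
const pairs = companyNames.flatMap(c =>
  foodItems.map(f => ({ company: c, item: f }))
);

console.log(pairs.length); // => 21 (3 companies x 7 foods)
```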
Overview
While we can write database code directly into our endpoints, best practices dictate that all database logic exists in separate, modular methods. These files containing database access helpers are often called models.
Follow Along
To handle CRUD operations for a single resource, we would want to create a model (or database access file) containing the following methods:
function find() {
}
function findById(id) {
}
function add(user) {
}
function update(changes, id) {
}
function remove(id) {
}
Each of these functions would use Knex logic to perform the necessary database operation.
function find() {
return db('users');
}
For each method, we can choose what value to return. For example, we may prefer findById()
to return a single user
object rather than an array.
function findById(id) {
// first() returns the first entry in the db matching the query
return db('users').where({ id }).first();
}
We can also use existing methods like findById()
to help add()
return the new user (instead of just the id).
function add(user) {
// return the promise so callers receive the new user
return db('users').insert(user)
.then(ids => {
return findById(ids[0]);
});
}
Once all methods are written as desired, we can export them like so:
module.exports = {
find,
findById,
add,
update,
remove,
}
…and use the helpers in our endpoints
const User = require('./user-model.js');
router.get('/', (req, res) => {
User.find()
.then(users => {
res.json(users);
})
.catch( err => {});
});
There should be no knex code in the endpoints themselves.
Normalization
is the process of designing or refactoring database tables for maximum consistency and minimum redundancy.
With objects, we’re used to denormalized data, stored with ease of use and speed in mind. Non-normalized tables are considered ineffective in relational databases.
Data normalization is a deep topic in database design. To begin thinking about it, we’ll explore a few basic guidelines and some data examples that violate these rules.
Normalization Guidelines
> Each record has a primary key.
> No fields are repeated.
> All fields relate directly to the key data.
> Each field entry contains a single data point.
> There are no redundant entries.
Denormalized Data
This table has two issues. There is no proper id field (as multiple farms may have the same name), and multiple fields are representing the same type of data: animals.
While we have now eliminated the first two issues, we now have multiple entries in one field, separated by commas. This isn’t good either, as it’s another example of denormalization. There is no “array” data type in a relational database, so each field must contain only one data point.
Now we’ve solved the multiple fields issue, but we created repeating data (the farm field), which is also an example of denormalization. As well, we can see that if we were tracking additional ranch information (such as annual revenue), that field is only vaguely related to the animal information.
When these issues begin arising in your schema design, it means that you should separate information into two or more tables.
Anomalies
Obeying the above guidelines prevents anomalies in your database when inserting, updating, or deleting data. For example, imagine if the revenue of Beech Ranch changed. With our denormalized schema, it may get updated in some records but not others:
Similarly, if Beech Ranch shut down, there would be three (if not more) records that needed to be deleted to remove a single farm.
Thus a denormalized table opens the door for contradictory, confusing, and unusable data.
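A small plain-JavaScript sketch of the update anomaly described above (the rows and revenue figures are hypothetical):

```javascript
// Denormalized rows: the farm's revenue is repeated on every rancher row.
let rows = [
  { rancher: 'John', farm: 'Beech Ranch', revenue: 100000 },
  { rancher: 'Jane', farm: 'Beech Ranch', revenue: 100000 },
  { rancher: 'Jen', farm: 'Beech Ranch', revenue: 100000 },
];

// A buggy update touches only one record...
rows[0].revenue = 150000;

// ...leaving the table contradicting itself about Beech Ranch's revenue.
const revenues = new Set(
  rows.filter(r => r.farm === 'Beech Ranch').map(r => r.revenue)
);
console.log(revenues.size); // => 2, two "truths" for one farm
```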
What issues does the following table have?
There are three types of relationships:
> One to one.
> One to many.
> Many to many.
Determining how data is related can provide a set of guidelines for table representation and guides the use of foreign keys to connect said tables.
One to One Relationships
Imagine we are storing the financial projections for a series of farms.
We may wish to attach fields like farm name, address, description, projected revenue, and projected expenses. We could divide these fields into two categories: information related to the farm directly (name, address, description) and information related to the financial projections (revenue, expenses).
We would say that farms
and projections
have a one-to-one relationship. This is to say that every farm has exactly one projection, and every projection corresponds to exactly one farm.
This data can be represented in two tables: farms
and projections
The farm_id
is the foreign key that links farms
and projections
together.
Notes about one-to-one relationships:
- The foreign key should always have a unique constraint to prevent duplicate entries. In the example above, we wouldn't want to allow multiple projections records for one farm.
- The foreign key can be in either table. For example, we may have had a projection_id in the farms table instead. A good rule of thumb is to put the foreign key in whichever table is more auxiliary to the other.
- You can represent one-to-one data in a single table without creating anomalies. However, it is sometimes prudent to use two tables as shown above to keep separate concerns in separate tables.
One to Many Relationships
Now imagine we are storing the full-time ranchers employed at each farm. In this case, each rancher only works at one farm; however, each farm may have multiple ranchers.
This is called a one-to-many relationship.
This is the most common type of relationship between entities. Some other examples:
- One customer can have many orders.
- One user can have many posts.
- One post can have many comments.
Manage this type of relationship by adding a foreign key on the “many” table of the relationship that points to the primary key on the “one” table. Consider the farms
and ranchers
tables.
In a one-to-many relationship, the foreign key (in this case farm_id) should not be unique.
Many to Many Relationships
If we want to track animals on a farm as well, we must explore the many-to-many relationship. A farm has multiple animals, and each type of animal may be present at multiple different farms.
Some other examples:
- an order can have many products, and the same product will appear in many orders.
- a book can have more than one author, and an author can write more than one book.
To model this relationship, we need to introduce an intermediary table that holds foreign keys that reference the primary key on the related tables. We now have a farms
, animals
, and farm_animals
table.
While each foreign key on the intermediary table is not unique, the combinations of keys should be unique.
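A plain-JavaScript sketch of that uniqueness rule, using hypothetical rows: each foreign key may repeat on its own, but the pair must not:

```javascript
// Hypothetical intermediary-table rows for the many-to-many relationship.
const farmAnimals = [
  { farm_id: 1, animal_id: 1 },
  { farm_id: 1, animal_id: 2 }, // same farm, different animal: OK
  { farm_id: 2, animal_id: 1 }, // same animal, different farm: OK
];

// The composite primary key would reject a duplicate (farm_id, animal_id) pair.
function isUniqueCombination(rows, row) {
  return !rows.some(
    r => r.farm_id === row.farm_id && r.animal_id === row.animal_id
  );
}

console.log(isUniqueCombination(farmAnimals, { farm_id: 2, animal_id: 2 })); // => true
console.log(isUniqueCombination(farmAnimals, { farm_id: 1, animal_id: 2 })); // => false
```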
The Knex query builder library also allows us to create multi-table schemas including foreign keys. However, there are a few extra things to keep in mind when designing a multi-table schema, such as using the correct order when creating tables, dropping tables, seeding data, and removing data.
We have to consider the way that deletes and updates through our API will impact related data.
Foreign Key Setup
In Knex, foreign key restrictions don’t automatically work. Whenever using foreign keys in your schema, add the following code to your knexfile
. This will prevent users from entering bad data into a foreign key column.
development: {
client: 'sqlite3',
useNullAsDefault: true,
connection: {
filename: './data/database.db3',
},
// needed when using foreign keys
pool: {
afterCreate: (conn, done) => {
// runs after a connection is made to the sqlite engine
conn.run('PRAGMA foreign_keys = ON', done); // turn on FK enforcement
},
},
},
Migrations
Let’s look at how we might track our farms
and ranchers
using Knex. In our migration file's up
function, we would want to create two tables:
exports.up = function(knex, Promise) {
return knex.schema
.createTable('farms', tbl => {
tbl.increments();
tbl.string('farm_name', 128)
.notNullable();
})
// we can chain together createTable
.createTable('ranchers', tbl => {
tbl.increments();
tbl.string('rancher_name', 128);
tbl.integer('farm_id')
// forces integer to be positive
.unsigned()
.notNullable()
.references('id')
// this table must exist already
.inTable('farms')
})
};
Note that the foreign key can only be created after the table it references.
In the down function, the opposite is true. We always want to drop a table with a foreign key before dropping the table it references.
exports.down = function(knex, Promise) {
// drop in the opposite order
return knex.schema
.dropTableIfExists('ranchers')
.dropTableIfExists('farms')
};
In the case of a many-to-many relationship, the syntax for creating an intermediary table is identical, except for one additional piece. We need a way to make sure our combination of foreign keys is unique.
.createTable('farm_animals', tbl => {
tbl.integer('farm_id')
.unsigned()
.notNullable()
.references('id')
// this table must exist already
.inTable('farms')
tbl.integer('animal_id')
.unsigned()
.notNullable()
.references('id')
// this table must exist already
.inTable('animals')
// the combination of the two keys becomes our primary key
// will enforce unique combinations of ids
tbl.primary(['farm_id', 'animal_id']);
});
Seeds
Order is also a concern when seeding. We want to create seeds in the same order we created our tables. In other words, don’t create a seed with a foreign key, until that reference record exists.
In our example, make sure to write the 01-farms
seed file and then the 02-ranchers
seed file.
However, we run into a problem with truncating our seeds, because we want to truncate 02-ranchers
before 01-farms
. A library called knex-cleaner
provides an easy solution for us.
Run knex seed:make 00-cleanup
and npm install knex-cleaner
. Inside the cleanup seed, use the following code.
const cleaner = require('knex-cleaner');
exports.seed = function(knex) {
return cleaner.clean(knex, {
mode: 'truncate', // resets ids
ignoreTables: ['knex_migrations', 'knex_migrations_lock'], // don't empty migration tables
});
};
This removes all tables (excluding the two tables that track migrations) in the correct order before any seed files run.
Cascading
If a user attempts to delete a record that is referenced by another record (such as attempting to delete Morton Ranch
when entries in our ranchers
table reference that record), by default, the database will block the action. The same thing can happen when updating a record that a foreign key references.
If we want to override this default, we can delete or update with cascade. With cascade, deleting a record also deletes all referencing records. We can set that up in our schema.
.createTable('ranchers', tbl => {
tbl.increments();
tbl.string('rancher_name', 128);
tbl.integer('farm_id')
.unsigned()
.notNullable()
.references('id')
.inTable('farms')
.onUpdate('CASCADE')
.onDelete('CASCADE');
})
Exported from Medium on August 24, 2021.
Express Quick Sheet
Settings
app.set('x', 'yyy')
app.get('x') //=> 'yyy'
app.enable('trust proxy')
app.disable('trust proxy')
app.enabled('trust proxy') //=> true
Environment
app.get('env')
Config
app.configure('production', function() {
app.set...
})
Wares
app.use(express.static(__dirname + '/public'))
app.use(express.logger())
Helpers
app.locals({
title: "MyApp",
})
Request & response
Request
// GET /user/tj
req.path //=> "/user/tj"
req.url //=> "/user/tj"
req.xhr //=> true|false
req.method //=> "GET"
req.params
req.params.name //=> "tj"
req.params[0]
// GET /search?q=tobi+ferret
req.query.q // => "tobi ferret"
req.cookies
req.accepted
// [ { value: 'application/json', quality: 1, type: 'application', subtype: 'json' },
// { value: 'text/html', quality: 0.5, type: 'text',subtype: 'html' } ]
req.is('html')
req.is('text/html')
req.headers
req.headers['host']
req.headers['user-agent']
req.headers['accept-encoding']
req.headers['accept-language']
Response
res.redirect('/')
res.redirect(301, '/')
res.set('Content-Type', 'text/html')
res.send('hi')
res.send(200, 'hi')
res.json({ a: 2 })
By Bryan Guner on March 5, 2021.
Express-Routing
Note: To read this in a rendered view, open your VS Code Command Palette (using Control+Shift+P on Windows, Command+Shift+P on macOS)
Create an Express application.
- Has a page that shows a list of people
- Has a page that allows you to add a person
- Is protected from Cross-Site Request Forgeries
In the images directory, you will find
- A screenshot of the person listing page
- A screenshot of the person creation form
The screenshots show you what is expected from a structure standpoint. They
are meant to be guides. The tests will not make any assertions about the
styling of your pages, merely the structure of the pages and the data presented
on them.
Use the technologies you have used up to this point. They are all installed in
the package.json for your convenience.
- Express.js
- pg (the library to connect to PostgreSQL), Sequelize, and Sequelize CLI
- CSURF middleware
- Pug.js
- cookie-parser middleware
- body-parser middleware
- nodemon (for development purposes)
A package.json file already exists with the dependencies. Please run npm install
to install those before doing your development and running your tests.
Do not remove any dependencies already listed in the package.json.
Running the application
You can run your application in “dev” mode. The nodemon package is installed
and runnable from npm run dev
.
Running the tests
This is “black-box testing”. The tests will only use your Express application.
It will not make connections to the database or directly test your route
handlers. They will merely make HTTP requests of your Express app and analyze
the responses.
To ease your development, tests will run against your development database
and not the test database.
You will be responsible for creating, migrating, and seeding the data in
your development database.
Run your tests with npm test
. You can run npm test test/test-file-name.js
to run the tests for a specific part of the assessment.
- Example: To only run the test
01-form-page.js
do,
npm test test/01-form-page.js
If you get tired of seeing all of the Sequelize failures, you can try running:
npm test 2> /dev/null
That should redirect the annoying messages into oblivion but leave the
mocha output available for you to read. This may prevent you from seeing other
errors, though, so make sure to run it without the 2> /dev/null
if you're
trying to get a test to pass and need to see error output.
App Requirements
These are the requirements for the application. Follow them closely. The tests
will attempt to read data from your rendered HTML.
Read all of the requirements. Determine the data needed to include in your data
model.
Please use port 8081 for your Express.js server.
The database
Create a database user with CREATEDB
privileges:
- The login username that you must use is “express_practice_app”
- The login password that you must use is “EzB5Dxo2dabnQBF8”
Initialize Sequelize in your assessment and use the following configuration in
your config/config.json
file:
{
"development": {
"username": "express_practice_app",
"password": "EzB5Dxo2dabnQBF8",
"database": "express_practice_development",
"host": "127.0.0.1",
"dialect": "postgres",
"seederStorage": "sequelize",
"logging": false
}
}
Remove logging: false
if you want to see SQL output in your terminal when
running tests with npm test
.
Create the development
database with those configurations.
For this assessment, the tests will be using your development
database
configuration defined in the config.json
file. The tests will not be
testing your database explicitly, but test specs DO rely on you setting up the
database AND database constraints properly.
You will need to generate and run the migrations, models, and seeders. There is
no need to run npm test
until after doing this.
The data model
You will need to store “People” data and “HairColor” data.
Generate a model (and migration) for the “HairColor” model with the attributes:
| Attribute name | Attribute type | Constraints |
| --- | --- | --- |
| color | string | unique, not nullable |

- the “color” column will hold values up to 20 characters in length and will not allow NULLs
Generate a model (and migration) for the “Person” model with the attributes:
| Attribute name | Attribute type | Constraints |
| --- | --- | --- |
| firstName | string | not nullable |
| lastName | string | not nullable |
| age | integer | |
| biography | text | |
| hairColorId | integer | not nullable, references HairColors |
Configure the migration so that:
- the “firstName” column will hold values up to 50 characters in length and will not allow NULLs
- the “lastName” column will hold values up to 50 characters in length and will not allow NULLs
- the “hairColorId” will not allow NULLs and references the "HairColors" table
Create a seeder file for HairColors
:
{ color: "Auburn", createdAt: '2019-04-12', updatedAt: '2019-04-12'},
{ color: "Black", createdAt: '2019-04-12', updatedAt: '2019-04-12' },
{ color: "Blonde", createdAt: '2019-04-12', updatedAt: '2019-04-12' },
{ color: "Brown", createdAt: '2019-04-12', updatedAt: '2019-04-12' },
{ color: "Other", createdAt: '2019-04-12', updatedAt: '2019-04-12' },
{ color: "Red", createdAt: '2019-04-12', updatedAt: '2019-04-12' },
{ color: "White", createdAt: '2019-04-12', updatedAt: '2019-04-12' },
If you set up the seeder correctly, the HairColors
table should have the
following pre-defined data in it:
| color |
| --- |
| Auburn |
| Black |
| Blonde |
| Brown |
| Other |
| Red |
| White |
NOTE: All of the data constraints for this assessment can be handled by the
database with the allowNull
and unique
flags in your migrations. You do
not need to use form validations in this project. They are good to have, in
real applications, but can require too much time for you to integrate them into
this project. Again, you do not need to use a form validator, just use
database constraints and let the errors turn into 500 status codes by Express.
Make sure to make the appropriate associations.
After you’ve generated your models, migrations, and seeder files, don’t forget
to migrate and seed your database with the appropriate Sequelize CLI commands.
Your main file
You must use the app.js file to create and configure your Express
application. You must store the instance of your Express.js application in a
variable named “app”. That is what is exported at the bottom of the app.js
file.
Set up your CSRF middleware to use cookies.
The route “GET /new-person”
This page shows a form in which a visitor can add a new person. The form must
have
- a method of “post”
- an action of “/new-person”
In the form, you should have these inputs with the provided name:
| Field HTML name | Field type | Constraints | Default values |
| --- | --- | --- | --- |
| firstName | single-line text | required | |
| lastName | single-line text | required | |
| age | number | | |
| biography | multi-line text | | |
| hairColorId | dropdown | required | One of the pre-defined hair colors |
| _csrf | hidden | | The value provided by the CSURF middleware |
You should also have a submit button.
Please refer to the screenshot below:
The route “POST /new-person”
The post handler should validate the data from the HTTP request. If everything
is fine, then it should create a new person and redirect to the route “/”.
Remember, all of the data constraints for this assessment can be handled by the
database with the allowNull
and unique
flags in your migrations. You do
not need to use form validations in this project. They are good to have, in
real applications, but can require too much time for you to integrate them into
this project. Again, you do not need to use a form validator; just use
database constraints and let Express turn the errors into 500 status codes.
If the data does not pass validation, then no new record should be created. It
is ok to just let Express return an error code of 500 in this case. Note:
you would not do this in a real application.
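A hedged sketch of that POST handler; the `Person` model name and field names are assumptions based on the form described above:

```javascript
// Sketch of the POST /new-person handler. Constraint violations bubble up
// to Express's default error handler, which returns a 500.
const { Person } = require('./models');

app.post('/new-person', async (req, res, next) => {
  try {
    const { firstName, lastName, age, biography, hairColorId } = req.body;
    await Person.create({ firstName, lastName, age, biography, hairColorId });
    res.redirect('/');
  } catch (err) {
    next(err); // no record is created; the error becomes a 500
  }
});
```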
The route “GET /”
When someone accesses your application, they should see a list of people that
are stored in your database. The list should contain:
- The person’s first name
- The person’s last name
- The person’s age
- A short biography
- Their hair color
Please refer to the screenshot below:
To create a table in a Pug.js template, you’ll use something like the following
code. You probably already know this, but it’s included for your reference.
table
thead
tr
th Header 1
th Header 2
tbody
each thing in things
tr
td= thing.property1
td= thing.property2
The tests will use a regular expression to determine if each piece of data is
wrapped with TD tags. For example, to test for the “Auburn”
value appearing in the HTML of the page, the tests would use the following
regular expression.
<td[^>]*>\s*Auburn\s*</td>
The regular expression will ignore any attributes that you put on the table data
tag as well as any white space around the entry for the data value.
Again, the styling is not important to the tests.
Exported from Medium on August 24, 2021.
Fetch Quick Sheet
Fetch
fetch('/data.json')
.then(response => response.json())
.then(data => {
console.log(data)
})
.catch(err => ...)
Response
fetch('/data.json')
.then(res => {
res.text() // response body (=> Promise)
res.json() // parse via JSON (=> Promise)
res.status //=> 200
res.statusText //=> 'OK'
res.redirected //=> false
res.ok //=> true
res.url //=> 'http://site.com/data.json'
res.type //=> 'basic'
// ('cors' 'default' 'error'
// 'opaque' 'opaqueredirect')
res.headers.get('Content-Type')
})
Request options
fetch('/data.json', {
method: 'post',
body: new FormData(form), // post body
body: JSON.stringify(...),
headers: {
'Accept': 'application/json'
},
credentials: 'same-origin', // send cookies
credentials: 'include', // send cookies, even in CORS
})
Catching errors
fetch('/data.json')
.then(checkStatus)
function checkStatus (res) {
if (res.status >= 200 && res.status < 300) {
return res
} else {
let err = new Error(res.statusText)
err.response = res
throw err
}
}
Non-2xx responses are still successful requests. Use another function to turn them into errors.
Using with node.js
const fetch = require('isomorphic-fetch')
See: isomorphic-fetch (npmjs.com)
If you found this guide helpful feel free to checkout my github/gists where I host similar content:
bgoonz — Overview
Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python | React | Node.js | Express | Sequelize…github.com
Or Checkout my personal Resource Site:
a/A-Student-Resources
goofy-euclid-1cd736.netlify.app
By Bryan Guner on March 5, 2021.
Exported from Medium on August 24, 2021.
Front End Behavioral Interview
Web Developer Job Interview Questions
1. DESCRIBE A WEB DEVELOPMENT PROJECT YOU WORKED ON FROM START TO FINISH. WHAT APPROACH DID YOU TAKE, WHAT CHALLENGES DID YOU FACE, AND HOW WERE YOU SUCCESSFUL?
Tip: Be transparent about what a real web development project looks like for you. Highlight your wins, of course, but don’t shy away from being real about the challenges. Interviewers aren’t looking to hear that you never have setbacks (that’s not realistic). They want to hear how you get past setbacks and ultimately succeed.
2. DESCRIBE A PROJECT THAT WENT OFF THE RAILS. WHY DID IT GO WRONG AND HOW DID YOU REACT?
Tip: Similar to the last question, interviewers are looking for honesty here. Sometimes projects go badly, and that’s OK. What’s important is how you respond to failures and learn from them so they don’t happen next time.
3. WHICH PROGRAMMING LANGUAGES ARE YOU PROFICIENT IN? ARE THERE ANY LANGUAGES YOU’RE NOT FAMILIAR WITH THAT YOU FEEL LIKE YOU NEED TO LEARN?
Tip: This question is pretty straightforward: let the interviewer know which languages you’re familiar with and how you use them. Ideally, you’ve scoped out the job beforehand and know that your experience syncs with what the employer needs. At the same time, have some new languages in mind that you’d like to learn. This highlights your willingness to keep growing professionally.
4. WHY ARE YOU DRAWN TO WEB DEVELOPMENT?
Tip: It’s a common pitfall to interview for a job and never explicitly say WHY you want to work in this specific field or for this particular employer/company. Even if the question doesn’t get asked, find a way to touch on it during the interview.
5. WHAT KIND OF TEAM ENVIRONMENT DO YOU THRIVE IN?
Tip: You may be tempted to say whatever you think the interviewer is looking for, but it’s way better to be honest. If the team you’ll be working with has a work style that’s completely outside of your comfort zone, then this particular job might not be a good fit for you. That being said, most development teams are dynamic and flexible, and if your employer knows what kind of environment suits you best, they can help find a spot on the team that WILL work for you.
6. HOW DO YOU KEEP ON TOP OF INDUSTRY NEWS AND TRENDS, AND HOW DO YOU APPLY THIS TO YOUR WORK?
Tip: You DO keep up with industry news, don’t you? If so, simply rattle off your list of favorite news sources and why they’re effective for keeping you in the know. And if tech news is something you’ve overlooked while being in the weeds of learning tech skills, take a few minutes to find a few suitable news blogs and tech Twitter accounts to put in your hip pocket (and be ready to bust them out at your next interview).
7. HOW DO YOU COMMUNICATE YOUR PROGRESS TO CLIENTS AND/OR STAKEHOLDERS?
Tip: The gist here is to demonstrate that you understand the importance of keeping clients and stakeholders up to date, and that you have ideas for establishing systems of communication (or that you’re willing to work with systems like Agile or Scrum if they’re used by your employer).
8. WHAT DO YOU DO IF A CLIENT OR STAKEHOLDER IS UNHAPPY WITH A PROJECT?
Tip: Having an effective communication strategy with stakeholders doesn’t mean you won’t sometimes receive negative feedback. So how do you respond? Do you get defensive? Shut down? Give up? Or do you find creative ways to accept that feedback and address client or shareholder concerns? Interviewers are looking for candidates who can adapt to and recover from hard times, so either think of a real example that you can share, or develop a client appeasement gameplan that you’ll use when the time comes.
9. GIVE ME AN EXAMPLE OF HOW YOU’D DESCRIBE WEB DEVELOPMENT (WHAT IT IS, WHY IT IS IMPORTANT) TO SOMEONE WHO IS COMPLETELY NEW TO TECH.
Tip: As a web developer you’ll find yourself in situations where you need to talk “tech” with non-techies. Making your work make sense to people who have no idea what it is you actually do is a valuable skill. Take off your developer hat for a day and think of some ways to describe web development to someone who doesn’t know Java from JavaScript.
10. GIVE AN EXAMPLE OF A WEBSITE OR WEB APPLICATION THAT YOU DON’T LIKE, POINT OUT WHAT’S WRONG WITH IT AND WHAT YOU WOULD CHANGE.
Tip: Interviewers may ask you to provide an example of a website you think is less than stellar, then ask you to describe what you think is lacking and what you’d do to improve it. It’s a good idea to have examples and explanations on hand (as well as examples of sites you think are super effective) going into an interview. Answering this question comprehensively shows interviewers that you aren’t signing on to mindlessly write code; you understand what makes good sites good and how to make bad sites better.
11. WHAT KIND OF MANAGEMENT STYLE DO YOU RESPOND TO BEST?
Tip: This question is another one where you might be tempted to make the interviewer happy. But you know what’s guaranteed to make YOU unhappy? Working for a manager whose style you can’t stand. Be as flexible and as open minded as you can when describing your preferred management style, but if there’s something that’s a complete deal-breaker for you (or that you particularly appreciate), don’t be shy about making it known.
12. HOW WOULD YOU DESCRIBE THE ROLE OF A WEB DEVELOPER? WHAT ARE THE MOST IMPORTANT ASPECTS OF THE JOB AND WHY?
Tip: Your clear and concise summary of a web developer role shows how you think about the web development process in general, and lets interviewers know what specific developer strengths and interests you bring to the job.
13. HOW DO YOU MANAGE YOUR TIME DURING A DEVELOPMENT CYCLE, AND WHAT METHODS DO YOU USE FOR ESTIMATING HOW LONG SPECIFIC DEVELOPMENT TASKS WILL TAKE?
Tip: Managing your time and estimating how long individual tasks will take is critical to your success as a web developer. If you’re already good at time management and estimation, revisit what works for you and build on it ahead of this question. And if your current time management approach isn’t working? Now’s a great time to implement a new system and get the most out of your work day.
14. WHAT SOFT SKILLS WILL YOU BRING TO THE JOB, AND HOW DO YOU ENVISION USING THEM?
Tip: Soft skills can be a difference maker. If it’s a choice between a skilled programmer and a skilled programmer who can write well or who has experience with project management, most employers will pick the latter. So have a list of your own soft skills ready, but also have examples of how they’ll be relevant to a web developer job. It’s not enough to say you’re good at written and verbal communication. You need to explain how your excellent written and verbal communication skills will help you relay project details to team members and stakeholders.
15. GIVE AN EXAMPLE OF A NON-CODING/WEB DEVELOPMENT PROBLEM THAT YOU’VE SOLVED, AND WHAT YOUR PROBLEM SOLVING PROCESS INVOLVED.
Tip: Yes, you’re interviewing for a web developer job, but remember to look to other experiences in your life for inspiration. Examples like the time you helped improve the ordering system at the cafe you worked at or put together a volunteer fundraising effort to save the music program at your kids’ school all speak to the breadth of your problem solving abilities and experiences.
Web-Dev-Hub
Memoization, Tabulation, and Sorting Algorithms by Example Why is looking at runtime not a reliable method of…bgoonz-blog.netlify.app
By Bryan Guner on August 6, 2021.
Exported from Medium on August 24, 2021.
Front End Interview Questions Part 2
These will focus more on vocabulary and concepts than the application driven approach in my last post!
CODEX
Here’s part one for reference:
The Web Developer’s Technical Interview
Questions….Answers… and links to the missing pieces.bryanguner.medium.com
- If you were to describe semantic HTML to the next cohort of students, what would you say?
Semantic HTML is markup that conveys meaning, not just appearance, so browsers, search engines, and other developers can identify each part of the page more easily.
- Name two big differences between display: block; and display: inline;.
block starts on a new line and takes up the full available width of its container.
inline starts on the same line as previous content, in line with other content, and takes up only the space needed for the content.
· What are the 4 areas of the box model?
content, padding, border, margin
· While using flexbox, what axis does the following property work on: align-items: center?
cross-axis
· Explain why git is valuable to a team of developers.
Allows you to dole out tiny pieces of a large project to many developers who can all work towards the same goal on their own chunks, allows roll back if you make a mistake; version control.
· What is the difference between an adaptive website and a fully responsive website?
An adaptive website “adapts” to fit a pre-determined set of screen sizes/devices, and a responsive one changes to fit all devices.
· Describe what it means to be mobile first vs desktop first.
It means you develop/code the site with mobile in mind first and work your way outward in screen size.
· What does font-size: 62.5% in the html tag do for us when using rem units?
This setting makes it so that 1rem = 10px for font size (62.5% of the browser default of 16px), which is easier to calculate.
· How would you describe preprocessing to someone new to CSS?
Preprocessing is basically a bunch of functions and variables you can use to store CSS settings in different ways that make it easier to code CSS.
· What is your favorite concept in preprocessing? What is the concept that gives you the most trouble?
Favorite is (parametric) mixins; but I don’t have a lot of trouble with preprocessing. What gives me the most trouble is knowing ahead of time what would be good to go in a mixin for a given site.
· Describe the biggest difference between .forEach & .map.
Both iterate over an array item by item, but map returns a new array built from each callback’s return value, while forEach returns undefined and is used only for side effects.
· What is the difference between a function and a method?
In JavaScript, every function is an object. A method is a function stored as a property of an object and invoked on that object, so methods have to be ‘received’ by something; plain functions do not.
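The distinction can be shown in a few lines (the names here are illustrative):

```javascript
// A plain function stands alone; a method is a function property
// "received" by the object it is invoked on via `this`.
function greet(name) {
  return `hi ${name}`;
}

const user = {
  name: 'Ada',
  greet() {
    return `hi ${this.name}`; // `this` is the receiving object
  },
};

greet('Ada');  // function call → 'hi Ada'
user.greet();  // method call: `user` receives the call → 'hi Ada'
```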
· What is closure?
A closure is a function combined with the lexical environment it was declared in: the inner function keeps access to the outer function’s variables even after the outer function has returned. If a variable isn’t defined locally, a function looks outward through that environment.
· Describe the four rules of the ‘this’ keyword.
Window/global binding — this is the window object (in the browser) or the global object (in Node). Use ‘use strict’; to prevent window binding.
Implicit binding — when a function is called by a dot, the object preceding the dot is the ‘this’. 80 percent of ‘this’ is from this type of binding.
New binding — points to new object created & returned by constructor function
Explicit binding — whenever call, bind, or apply are used.
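The four rules can be sketched in a few lines (illustrative names throughout):

```javascript
// 1. Window/global binding: a bare function call sets `this` to the global
//    object (or undefined under 'use strict').

// 2. Implicit binding: the object to the left of the dot becomes `this`.
const dog = {
  name: 'Rex',
  speak() {
    return `${this.name} barks`;
  },
};

// 3. New binding: `this` is the new object the constructor creates.
function Cat(name) {
  this.name = name;
}
const cat = new Cat('Tom');

// 4. Explicit binding: call, apply, or bind set `this` directly.
const loose = dog.speak;
const bound = loose.bind({ name: 'Fido' });

dog.speak(); // 'Rex barks'  (implicit)
cat.name;    // 'Tom'        (new)
bound();     // 'Fido barks' (explicit)
```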
· Why do we need super() in an extended class?
Super ties the parent to the child.
- What is the DOM?
Document object model, the ‘window’ or container that holds all the page’s elements
- What is an event?
An event is something happening on or to the page, like a mouse click, double-click, key up/down, pointer out of or over an element, and so on. There are tons of events that JavaScript can detect.
- What is an event listener?
A JavaScript command that ‘listens’ for an event to happen to a given element on the page and then runs a function when that event happens
- Why would we convert a NodeList into an Array?
A NodeList isn’t a real array, so it won’t have access to array methods such as slice or map.
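A quick sketch of the conversion, using a generic array-like object to stand in for a real NodeList (which only exists in the browser):

```javascript
// A NodeList is array-like (indexed entries plus a length) but not a real
// Array, so Array.from (or spread) converts it before using map/filter/slice.
const fakeNodeList = { 0: 'div', 1: 'p', length: 2 };

const tags = Array.from(fakeNodeList);
const upper = tags.map((tag) => tag.toUpperCase()); // ['DIV', 'P']
```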
- What is a component?
Reusable pieces of code to display info in a consistent repeatable way
· What is React JS and what problems does it try and solve? Support your answer with concepts introduced in class and from your personal research on the web.
ReactJS is a library used to build large applications. It’s very good at assisting developers in manipulating the DOM element to create rich user experiences. We need a way to off-load the state/data that our apps use, and React helps us do that.
· What does it mean to think in react?
It makes you think about how to organize/build apps a little differently because it’s very scalable and allows you to build huge apps. React’s one-way data flow makes everything modular and fast. You can build apps top-down or bottom-up.
· Describe state.
Some data we want to display.
· Describe props.
Props are like function arguments in JS and attributes in HTML.
· What are side effects, and how do you sync effects in a React component to state or prop changes?
Side effects are anything that affects something outside the executed function’s scope, like fetching data from an API, a timer, or logging. In React, you sync them to state or prop changes with the useEffect hook, listing those values in its dependency array.
· Explain benefit(s) using client-side routing?
Answer: It’s much more efficient than the traditional way, because a lot of data isn’t being transmitted unnecessarily.
· Why would you use class component over function components (removing hooks from the question)?
Because some companies still use class components and don’t want to switch their millions of dollars’ worth of code over to all functional hooks, and also there’s currently a lot more troubleshooting content out there for classes that isn’t out there for hooks. Also, functional components are better when you don’t need state, presentational components.
· Name three lifecycle methods and their purposes.
componentDidMount = do the stuff inside this ‘function’ after the component mounted
componentDidUpdate = do the stuff inside this function after the component updated
componentWillUnmount = clean-up in death/unmounting phase
· What is the purpose of a custom hook?
allow you to apply non-visual behavior and stateful logic throughout your components by reusing the same hook over and over again
· Why is it important to test our apps?
Gets bugs fixed faster, reduces regression risk, makes you consider/work out the edge cases, acts as documentation, acts as safety net when refactoring, makes code more trustworthy
· What problem does the context API help solve?
You can store data in a context object instead of prop drilling.
· In your own words, describe actions, reducers and the store and their role in Redux. What does each piece do? Why is the store known as a ‘single source of truth’ in a redux application?
Everything that changes within your app is represented by a single JS object called the store. The store contains state for our application. When changes are made to our application state, we never write to our store object but rather clone the state object, modify the clone, and replace the original state with the new copy, never mutating the original object. Reducers are the only place we can update our state. Actions tell our reducers “how” to update the state, and perhaps with what data it should be updated, but only a reducer can actually update the state.
· What is the difference between Application state and Component state? When would be a good time to use one over the other?
App state is global, component state is local. Use component state when you have component-specific variables.
· Describe redux-thunk, what does it allow us to do? How does it change our action-creators?
Redux Thunk is middleware that provides the ability to handle asynchronous operations inside our Action Creators, because reducers are normally synchronous.
· What is your favorite state management system you’ve learned and this sprint? Please explain why!
Redux, because I felt it was easier to understand than the context API.
· Explain what a token is used for.
Many services out in the wild require the client (our React app, for example) to provide proof that it’s authenticated with them. The server running these services can issue a JWT (JSON Web Token) as the authentication token, in exchange for correct login credentials.
· What steps can you take in your web apps to keep your data secure?
As we build our web apps, we will most likely have some “protected” routes — routes that we only want to render if the user has logged in and been authenticated by our server. The way this normally works is we make a login request, sending the server the user’s username and password. The server will check those credentials against what is in the database, and if it can authenticate the user, it will return a token. Once we have this token, we can add two layers of protection to our app. One with protected routes, the other by sending an authentication header with our API calls (as we learned in the above objective).
· Describe how web servers work.
The “world wide web” (which we’ll refer to as “the web”) is just a part of the internet — which is itself a network of interconnected computers. The web is just one way to share data over the internet. It consists of a body of information stored on web servers, ready to be shared across the world. The term “web server” can mean two things:
· a computer that stores the code for a website
· a program that runs on such a computer
The physical computer device that we call a web server is connected to the internet, and stores the code for different websites to be shared across the world at all times. When we load the code for our websites, or web apps, on a server like this, we would say that the server is “hosting” our website/app.
· Which HTTP methods can be mapped to the CRUD acronym that we use when interfacing with APIs/Servers.
POST (Create), GET (Read), PUT/PATCH (Update), DELETE (Delete)
· Mention two parts of Express that you learned about this week.
Routing/router, Middleware, convenience helpers
· Describe Middleware?
array of functions that get executed in the order they are introduced into the server code
· Describe a Resource?
o everything is a resource.
o each resource is accessible via a unique URI.
o resources can have multiple representations.
o communication is done over a stateless protocol (HTTP).
o management of resources is done via HTTP methods.
· What can the API return to help clients know if a request was successful?
A 2xx status code, e.g. 200 OK or 201 Created
· How can we partition our application into sub-applications?
By dividing the code up into multiple files and ‘requiring’ them in the main server file.
· Explain the difference between Relational Databases and SQL.
SQL is the language used to access a relational database.
· Why do tables need a primary key?
To uniquely identify each record/row.
· What is the name given to a table column that references the primary key on another table?
Foreign key
· What do we need in order to have a many-to-many relationship between two tables?
An intermediary table that holds foreign keys that reference the primary key on the related tables.
· What is the purpose of using sessions?
The purpose is to persist data across requests.
· What does bcrypt do to help us store passwords in a secure manner?
o password hashing function.
o implements salting, both manually and automatically.
o accumulative hashing rounds.
· What does bcrypt do to slow down attackers?
Having an algorithm that hashes the information multiple times (rounds) means an attacker needs to have the hash, know the algorithm used, and how many rounds were used to generate the hash in the first place. So it basically makes it a lot more difficult to get somebody’s password.
· What are the three parts of the JSON Web Token?
Header, payload, signature
If you found this guide helpful feel free to checkout my GitHub/gists where I host similar content:
bgoonz’s gists
Instantly share code, notes, and snippets. Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python |…gist.github.com
bgoonz — Overview
Web Developer, Electrical Engineer JavaScript | CSS | Bootstrap | Python | React | Node.js | Express | Sequelize…github.com
Or Checkout my personal Resource Site:
Web-Dev-Hub
best-celery-b2d7c.netlify.app
By Bryan Guner on March 19, 2021.
Exported from Medium on August 24, 2021.
Fundamental Concepts In Javascript
This is the stuff that comes up on interviews…
Fundamental Concepts In Javascript
This is the stuff that comes up on interviews…
Here are most of the below exercises!
- Label variables as either Primitive vs. Reference
- primitives: strings, booleans, numbers, null and undefined
- primitives are immutable
- references: objects (including arrays)
- references are mutable
- Identify when to use `.` vs `[]` when accessing values of an object
- dot syntax: `object.key`
  - easier to read
  - easier to write
  - cannot use variables as keys
  - keys cannot begin with a number
- bracket notation: `object["key"]`
  - allows variables as keys
  - strings that start with numbers can be used as keys
- Write an object literal with a variable key using interpolation
put it in brackets to access the value of the variable, rather than just making the value that string
- Use the `obj[key] !== undefined` pattern to check if a given variable that contains a key exists in an object
  - can also use the `(key in object)` syntax interchangeably (returns a boolean)
- Utilize Object.keys and Object.values in a function
  - `Object.keys(obj)` returns an array of all the keys in `obj`
  - `Object.values(obj)` returns an array of the values in `obj`
- Iterate through an object using a `for...in` loop
- Define a function that utilizes `...rest` syntax to accept an arbitrary number of arguments
  - `...rest` syntax will store all additional arguments in an array
  - the array will be empty if there are no additional arguments
- Use `...spread` syntax for Object literals and Array literals
- Destructure an array to reference specific elements
- Write a function that accepts an array as an argument and returns an object representing the count of each character in the array
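One way to implement that character-count exercise (the function name is illustrative):

```javascript
// Tally each character in the array into an object of counts.
function charCount(chars) {
  const counts = {};
  for (const ch of chars) {
    counts[ch] = (counts[ch] || 0) + 1; // default missing keys to 0
  }
  return counts;
}

charCount(['a', 'b', 'a', 'c', 'b', 'a']); // { a: 3, b: 2, c: 1 }
```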
Callbacks Lesson Concepts
- Given multiple plausible reasons, identify why functions are called “First Class Objects” in JavaScript.
- they can be stored in variables, passed as arguments to other functions, and serve as return value for a function
- supports same basic operations as other types (strings, bools, numbers)
- higher-order functions take functions as arguments or return functions as values
- Given a code snippet containing an anonymous callback, a named callback, and multiple `console.log`s, predict what will be printed
  - what is `this` referring to?
- Write a function that takes in a value and two callbacks. The function should return the result of the callback that is greater.
- Write a function, myMap, that takes in an array and a callback as arguments. The function should mimic the behavior of `Array#map`.
- Write a function, myFilter, that takes in an array and a callback as arguments. The function should mimic the behavior of `Array#filter`.
- Write a function, myEvery, that takes in an array and a callback as arguments. The function should mimic the behavior of `Array#every`.
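A sketch of myMap; the same loop shape extends to myFilter and myEvery:

```javascript
// Mimic Array#map with a plain loop: call the callback on each element
// and collect the return values into a new array.
function myMap(arr, callback) {
  const mapped = [];
  for (let i = 0; i < arr.length; i++) {
    mapped.push(callback(arr[i], i, arr)); // same args Array#map passes
  }
  return mapped;
}

myMap([1, 2, 3], (n) => n * 2); // [2, 4, 6]
```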
Scope Lesson Concepts
- Identify the difference between `const`, `let`, and `var` declarations
  - `const`: cannot reassign variable, scoped to block
  - `let`: can reassign variable, scoped to block
  - `var`: outdated; may or may not be reassigned, scoped to function; can be not just reassigned but also redeclared!
- a variable will always evaluate to the value it contains regardless of how it was declared
Explain the difference between `const`, `let`, and `var` declarations
- `var` is function scoped, so if you declare it anywhere in a function, the declaration (but not the assignment… the fact that it exists is known to the JavaScript engine, but the value assigned to it is a mystery until the code is run line by line!) is “hoisted”, so it will exist in memory as “undefined”, which is bad and unpredictable
- `var` will also allow you to redeclare a variable, while `let` or `const` will raise a syntax error; you shouldn’t be able to do that!
!! `const` won’t let you reassign a variable !!
- but if it points to a mutable object, you will still be able to change the value by mutating the object
- block-scoped variables allow new variables with the same name in new scopes
- block scoping still performs hoisting of all variables within the block, but it doesn’t initialize them to the value of `undefined` like `var` does, so it throws a specific reference error if you try to access the value before it has been declared
- if you do not use `var`, `let`, or `const` when initializing, the variable will be declared as global; THIS IS BAD (pretend that’s something you didn’t even know you could do)
- if you assign a value without a declaration *(la la la la… I’m not listening)*, it exists in the global scope (so it would be accessible by all outer scopes, which is bad). However, there’s no hoisting, so it doesn’t exist in the scope until after the line is run.
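The hoisting difference above can be demonstrated directly (names are illustrative):

```javascript
// `var` declarations are hoisted and initialized to undefined; `let`
// declarations are hoisted but left uninitialized, so touching them before
// the declaration line throws a ReferenceError (the "temporal dead zone").
function demo() {
  const seenBeforeAssignment = hoisted; // undefined, no error
  var hoisted = 1;

  let threw = false;
  try {
    tdz; // ReferenceError: accessed before its `let` declaration runs
  } catch (e) {
    threw = e instanceof ReferenceError;
  }
  let tdz = 2;

  return { seenBeforeAssignment, threw };
}
```

Calling `demo()` returns `seenBeforeAssignment` as undefined and `threw` as true.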
Predict the evaluation of code that utilizes function scope, block scope, lexical scope, and scope chaining
- scope of a program means the set of variables that are available for use within the program
- global scope is represented by the `window` object in the browser and the `global` object in Node.js
- global variables are available everywhere, and so increase the risk of name collisions
local scope is the set of variables available for use within the function
- when we enter a function, we enter a new scope
- includes the function’s arguments, local variables declared inside the function, and any variables that were already declared when the function was defined (hmm about that last one)
- for blocks (denoted by curly braces `{}`, as in conditionals or `for` loops), variables can be block scoped
- an outer scope does not have access to variables declared in an inner scope
- scope chaining — if a given variable is not found in immediate scope, javascript will search all accessible outer scopes until variable is found
- so an inner scope can access outer scope variables
- but an outer scope can never access inner scope variables
Define an arrow function
Given an arrow function, deduce the value of `this` without executing the code
- arrow functions are automatically bound to the context they were declared in
- unlike regular functions, which use the context they are invoked in (unless they have been bound using `Function#bind`)
- if you implement an arrow function as a method in an object, the context it will be bound to is NOT the object itself, but the global context
- so you can’t use an arrow function to define a method directly
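A small sketch of why the arrow-as-method pattern fails (illustrative names; exactly what the arrow sees as `this` depends on the enclosing context, but it is never the object):

```javascript
// Arrow functions capture `this` from where they are defined, so using one
// as an object method does NOT bind `this` to the object.
const counter = {
  count: 42,
  arrow: () => this && this.count, // `this` is the enclosing context, not `counter`
  regular() {
    return this.count;             // `this` is `counter`
  },
};

counter.regular(); // 42
counter.arrow();   // not 42: the object is ignored
```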
Implement a closure and explain how the closure affects scope
a closure is “the combination of a function and the lexical environment within which that function was declared”
- alternatively, “when an inner function uses or changes variables in an outer function”
- closures have access to any variables within their own scope + the scope of outer functions + the global scope
- the set of all these available variables is the “lexical environment”
- a closure keeps a reference to all of its variables even if the outer function has returned
- since only the inner function can access the variables the outer function ‘closed’ over, each function has a private mutable state that cannot be accessed externally
- the inner function will maintain a reference to the scope in which it was declared, so it has access to variables that were initialized in any outer scope, even after that scope has returned
Q:
if a variable exists in the scope of what could have been accessed by a function (e.g. global scope, outer function, etc.), does that variable wind up in the closure even if it never got accessed?
A:
if you change the value of a variable (e.g. i++), you will change the value of that variable in the scope it was declared in
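A classic counter illustrates both points: the variable survives the outer call, and mutating it (like the i++ case) changes it in the scope where it was declared (names are illustrative):

```javascript
// `count` lives on after makeCounter returns; each call to the returned
// inner function mutates it in the scope it was declared in.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}

const tick = makeCounter();
tick(); // 1
tick(); // 2 — private state, inaccessible except through the closure
```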
Define a method that references this
on an object literal
- when we use `this` in a method, it refers to the object that the method is invoked on
- it will let you access other pieces of information from within that object, or even other methods
- method-style invocation: `object.method(args)` (e.g. built-in examples like `Array#push` or `String#toUpperCase`)
- context is set every time we invoke a function
- function style invocation sets the context to the global object no matter what
- being inside an object does not make the context that object! you still have to use method-style invocation
- Utilize the built-in `Function#bind` on a callback to maintain the context of `this`
- when we call bind on a function, we get an “exotic” function back, so the context will always be the same for that new function
bind can also work with arguments, so you can have a version of a function with particular arguments and a particular context. The first argument will be the context, aka the `this` you want it to use; the next arguments will be the function’s arguments that you are binding. If you just want to bind particular arguments, you can use `null` as the first argument, so the context won’t be bound, just the arguments.
Given a code snippet, identify what `this` refers to
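Both uses of bind can be sketched in a few lines (illustrative names):

```javascript
// bind can preset the context and any leading arguments (partial application).
function describe(adjective, noun) {
  return `${this.name} is a ${adjective} ${noun}`;
}

const praiseRex = describe.bind({ name: 'Rex' }, 'good');
praiseRex('dog'); // 'Rex is a good dog'

// Pass null as the first argument to bind only the arguments.
const add = (a, b) => a + b;
const addFive = add.bind(null, 5);
addFive(3); // 8
```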
> Important to recognize the difference between scope and context
- Scope works like a dictionary that has all the variables available within a given block, plus a pointer back to the next outer scope (which itself has a pointer to its outer scope, and so on until you reach the global scope, so you can think of a given block's scope as a kind of linked list of dictionaries). This is not to say scope is actually implemented this way; it is just a mental model.
- context refers to the value of the `this` keyword
- the keyword `this` exists in every function and it evaluates to the object that is currently invoking that function
- so the context is fairly straightforward when we talk about methods being called on specific objects
- You could, however, call an object's method on something other than that object, and then `this` would refer to the context where/how it was called, e.g.:
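A sketch of the mishap described above; the `counter` object is illustrative:

```javascript
// Detaching a method loses its intended context.
const counter = {
  count: 0,
  increment() {
    this.count++;
    return this.count;
  },
};

console.log(counter.increment()); // 1 -- `this` is `counter`

const detached = counter.increment;
// Called function-style, `this` is the global object (or undefined in
// strict mode), so `this.count` is not what you expect:
// detached(); // wrong context!

const rebound = counter.increment.bind(counter);
console.log(rebound()); // 2 -- context restored
```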
- CALLING SOMETHING IN THE WRONG CONTEXT CAN MESS YOU UP!
  - It could throw an error if it expects `this` to have some method or property that doesn't exist.
  - You could also overwrite values, or assign values into a place where they should not exist.
- If you pass a method as a callback, it gets invoked function-style, so `this` will be the global object (or undefined in strict mode) rather than the object the method was defined on.
We can use strict mode with "use strict"; this prevents `this` from defaulting to the global object in functions. So if you reference `this` in a function invoked in the global context and try to change a value, you will get a TypeError, and the things you try to access will be undefined.
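A sketch of the strict-mode behavior described above:

```javascript
"use strict";

// In strict mode, a function-style call leaves `this` undefined
// instead of defaulting to the global object.
function whatIsThis() {
  return this;
}

console.log(whatIsThis()); // undefined (would be globalThis without strict mode)

function tryToWrite() {
  // this.oops = 1; // TypeError: cannot set a property on undefined
  return typeof this;
}

console.log(tryToWrite());
```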
POJOs
1. Label variables as either primitive or reference
JavaScript considers most data types to be "primitive". These data types are immutable and are passed by value. The more complex data types (Array, Object, and Function) are mutable, are considered "reference" data types, and are passed by reference.
- Boolean — Primitive
- Null — Primitive
- Undefined — Primitive
- Number — Primitive
- String — Primitive
- Array — Reference
- Object — Reference
- Function — Reference
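The value-vs-reference distinction above can be sketched as:

```javascript
// Primitives are copied by value; objects are passed by reference.
let a = 1;
let b = a; // copy of the value
b++;
console.log(a, b); // 1 2 -- `a` is untouched

const arr1 = [1, 2];
const arr2 = arr1; // copy of the reference, not the array
arr2.push(3);
console.log(arr1); // [1, 2, 3] -- same underlying array
```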
2. Identify when to use . vs [] when accessing values of an object
3. Write an object literal with a variable key using interpolation
4. Use the obj[key] !== undefined pattern to check if a given variable that contains a key exists in an object
5. Utilize Object.keys and Object.values in a function
6. Iterate through an object using a for in loop
7. Define a function that utilizes …rest syntax to accept an arbitrary number of arguments
8. Use …spread syntax for Object literals and Array literals
9. Destructure an array to reference specific elements
10. Destructure an object to reference specific values
11. Write a function that accepts a string as an argument and returns an object representing the count of each character in the string
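Several of the objectives above can be sketched together; the helper name `countChars` is illustrative, not from the original:

```javascript
// Bracket access with a variable key, the `!== undefined` existence
// check, Object.keys/values, rest, and destructuring, in one example.
function countChars(str) {
  const counts = {};
  for (const ch of str) {
    // bracket notation is required here because `ch` is a variable key
    if (counts[ch] !== undefined) {
      counts[ch] += 1;
    } else {
      counts[ch] = 1;
    }
  }
  return counts;
}

const result = countChars("hello");
console.log(result);                // { h: 1, e: 1, l: 2, o: 1 }
console.log(Object.keys(result));   // ["h", "e", "l", "o"]
console.log(Object.values(result)); // [1, 1, 2, 1]

const [first, ...rest] = Object.keys(result); // destructuring + rest
console.log(first, rest);           // "h" ["e", "l", "o"]
```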
Review of Concepts
1. Identify the difference between const, let, and var declarations
2. Explain the difference between const, let, and var declarations
var a = "a";

- `var` is the historical keyword used for variable declaration.
- `var` declares variables in function scope, or global scope if not inside a function.
- We consider `var` to be deprecated and it is never used in this course.

let b = "b";

- `let` is the keyword we use most often for variable declaration.
- `let` declares variables in block scope.
- Variables declared with `let` are re-assignable.

const c = "c";

- `const` is a specialized form of `let` that can only be used to initialize a variable.
- Except when it is declared, you cannot assign to a `const` variable.
- `const` scopes variables the same way that `let` does.
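The differences above can be sketched as:

```javascript
// var is function-scoped; let and const are block-scoped.
function scopes() {
  if (true) {
    var f = "function-scoped";
    let b = "block-scoped";
    const c = "also block-scoped";
    console.log(b, c); // fine inside the block
  }
  console.log(f); // "function-scoped" -- var leaks out of the block
  // console.log(b); // ReferenceError: b is not defined
  // console.log(c); // ReferenceError: c is not defined
}
scopes();

let x = 1;
x = 2; // fine: let is re-assignable

const y = 1;
// y = 2; // TypeError: Assignment to constant variable.
```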
3. Predict the evaluation of code that utilizes function scope, block scope, lexical scope, and scope chaining
Consider this `run` function, inside which `foo` and `bar` have function scope. `i` and `baz` are scoped to the block expression. Notice that referencing `baz` from outside its block results in JavaScript throwing a ReferenceError.
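The original snippet did not survive the export; the sketch below reconstructs its behavior using the names from the description (`run`, `foo`, `bar`, `i`, `baz`):

```javascript
// foo and bar: function scope. i and baz: block scope.
function run() {
  var foo = "function scope";
  let bar = "also function scope"; // declared at the top level of run
  for (let i = 0; i < 1; i++) {
    let baz = "block scope";
    console.log(i, baz); // fine: both are visible inside the block
  }
  console.log(foo, bar); // fine
  console.log(baz); // ReferenceError: baz is not defined
}
// run(); // throws when it reaches the `baz` reference
```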
Consider this `run` function, inside of which `foo` has function scope.
6. Implement a closure and explain how the closure affects scope
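The closure snippet is missing from the export; a minimal sketch matching the description (a `run` function whose `foo` has function scope, kept alive by a closure):

```javascript
// `run` returns an inner function that closes over `foo`,
// keeping it alive after `run` has returned.
function run() {
  const foo = "function scope";
  return function () {
    return foo; // still reachable through the closure
  };
}

const getFoo = run();
console.log(getFoo()); // "function scope"
console.log(typeof foo); // "undefined" -- foo itself never leaks out
```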
4. Define an arrow function
const returnValue = (val) => val;
This simple construct will create a function that accepts `val` as a parameter and returns `val` immediately. We do not need to write `return val`, because this is a single-expression arrow function.
Identically, we could write
const returnValue = (val) => {
return val;
};
5. Given an arrow function, deduce the value of `this` without executing the code
If we use a function-declaration style function invoked on its own, the `this` variable is set to the global object (i.e. `Object [global]` in Node.js and `Window` in your browser).
const adder = (arr) => {
  console.log(this);
  return arr.reduce((acc, ele) => acc + ele);
};

adder([1, 2, 4, 6]);

In this example, we use a fat-arrow style function. Note that when we declare a function like this, `this` does not get its own binding: an arrow function inherits `this` from the enclosing lexical scope, so it becomes whatever `this` was where the arrow function was defined (e.g. `{}` at the top level of a Node module, or `Window` in a browser script).
7. Define a method that references this on an object literal
8. Utilize the built in Function#bind on a callback to maintain the context of this
9. Given a code snippet, identify what `this` refers to
Web-Dev-Hub: Memoization, Tabulation, and Sorting Algorithms by Example (bgoonz-blog.netlify.app)
By Bryan Guner on August 11, 2021.
Exported from Medium on August 24, 2021.
Fundamental Concepts In React That Will Probably Come Up On An Interview
Incomplete Article
A list of all of my articles to link to future posts: You should probably skip this one… seriously, it's just for internal use! (bryanguner.medium.com)
Explain how React uses a tree data structure called the “virtual DOM” to model the DOM
↠The Virtual DOM is an in-memory tree representation of the browser’s Document Object Model. React’s philosophy is to interact with the Virtual DOM instead of the regular DOM for developer ease and performance.
↠By abstracting the key concepts from the DOM, React is able to expose additional tooling and functionality, increasing developer ease.
↠By trading off the additional memory requirements of the Virtual DOM, React is able to optimize for efficient subtree comparisons, resulting in fewer, simpler updates to the less efficient DOM. The result of these tradeoffs is improved performance.
Describe how JSX transforms into React.createElement calls:
↠JSX is a special format to let you construct virtual DOM nodes using familiar HTML-like syntax. You can put the JSX directly into your .js files, however you must run the JSX through a pre-compiler like Babel in order for the browser to understand it.
↠ReactDOM.render is a simple function which accepts 2 arguments: what to render and where to render it:
Here we initialize a Clock component using JSX instead of React.createElement .
Using Babel this code is compiled to a series of recursively nested createElement calls:
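The compiled snippet did not survive the export. The sketch below uses a minimal stand-in for `React.createElement` (so it runs without React installed) to show the shape of the transform Babel performs; the clock-like markup is illustrative:

```javascript
// Minimal stand-in for React.createElement: the real function returns a
// React element object, but the recursive nesting is the same.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// JSX like:
//   <div className="clock">
//     <h1>{time}</h1>
//   </div>
// compiles (via Babel) to recursively nested createElement calls:
const time = "12:00";
const clock = createElement(
  "div",
  { className: "clock" },
  createElement("h1", null, time)
);

console.log(clock.type); // "div"
console.log(clock.children[0].children[0]); // "12:00"
```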
TBC…
By Bryan Guner on June 29, 2021.