SQL CRUD Basics Part 3 – Update

Some data may never change; other data will. In SQL, you modify existing rows of data with the UPDATE command. UPDATE is a powerful command: it can change multiple rows of data in a single execution – for better or worse. UPDATE is categorized as a DML (Data Manipulation Language) command. Let’s learn how to use this integral command with examples…
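
As a quick preview, here is a minimal sketch of an UPDATE statement (the employees table, its columns, and the values are hypothetical, for illustration only):

-- Hypothetical table and values, for illustration only.
-- The WHERE clause limits which rows change; leaving it off
-- would update EVERY row in the table.
UPDATE employees
SET salary = salary * 1.05
WHERE department = 'Engineering';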

[Keep reading for more SQL database and Python-centric content >>>]

SQL CRUD Basics Part 2 – Read

In SQL CRUD Basics Part 1 – Create, we learned how to create new rows of data in a database table with the INSERT statement. In this post, we visit the busiest statement in SQL – SELECT. If you want to view or read the data stored in a table, you use SELECT.
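
As a quick preview, a minimal SELECT sketch (the employees table and its columns are hypothetical, for illustration only):

-- Hypothetical table and columns, for illustration only.
-- Read the chosen columns from rows matching the filter.
SELECT first_name, last_name
FROM employees
WHERE department = 'Engineering';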

[Head this way for great SQL and Python-centric blogging >>>]

SQL CRUD Basics Part 1 – Create

In Introduction to SQL CRUD Basics, I listed the 4 elements of CRUD. Create is the first, and it is the subject of this post. Create maps to the SQL INSERT statement, which introduces new rows of data into a database table. Continue reading to learn the basic usage of this first CRUD element.
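
As a quick preview, a minimal INSERT sketch (the employees table, its columns, and the values are hypothetical, for illustration only):

-- Hypothetical table and values, for illustration only.
INSERT INTO employees (first_name, last_name, department)
VALUES ('Jane', 'Doe', 'Engineering');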

[Head this way for great SQL and Python-centric blogging >>>]

Introduction to SQL CRUD Basics

Are you familiar with CRUD operations? If not, are you curious about what they are and what they are used for? In this particular sense of the word, CRUD applies to a specific set of operations in data storage: it is an acronym for Create, Read, Update, Delete. This post is an introduction to a planned series of posts in which I’ll focus on the SQL database aspect of its meaning.

Why is CRUD important for those interested in learning SQL? Each letter of the acronym stands for a basic – and important – SQL operation.
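
As a roadmap for the series, here is how each letter maps to a SQL statement – bare-bones skeletons against a hypothetical my_table, for illustration only:

-- Illustrative skeletons only; my_table and its column are hypothetical.
INSERT INTO my_table (col) VALUES ('val');  -- Create
SELECT col FROM my_table;                   -- Read
UPDATE my_table SET col = 'new_val';        -- Update
DELETE FROM my_table;                       -- Delete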

[Keep reading for more SQL database and Python-centric content >>>]

Window Functions in PostgreSQL – example with 3-day rolling average

After reading the fantastic post Window Functions in Python and SQL, I decided to apply a similar function to a data set that interests me: the walking/hiking stats I keep for all of my (daily) walks. While this blog post covers more of the SQL aspect, I plan to write one covering the Python and pandas portion in the near future…

OS, Database, and software used:
  • Xubuntu Linux 18.04.2 LTS (Bionic Beaver)
  • PostgreSQL 11.4


Self-Promotion:

If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!


I use the following table and structure to store and track walking stats data. I have written several blog posts detailing different methods for bulk loading CSV data into it with PostgreSQL and pandas. Be sure to visit those linked posts in the closing section below if you are interested.
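
For a one-line taste of that approach before visiting those posts, psql’s \copy meta-command can load a CSV file into the table from the client side (the file name here is hypothetical):

-- Client-side CSV load in psql; 'walking_stats.csv' is a hypothetical file name.
\copy stats FROM 'walking_stats.csv' WITH (FORMAT csv, HEADER)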

walking_stats=> \d stats;
                          Table "public.stats"
    Column    |          Type          | Collation | Nullable | Default
--------------+------------------------+-----------+----------+---------
 day_walked   | date                   |           |          |
 cal_burned   | numeric(4,1)           |           |          |
 miles_walked | numeric(4,2)           |           |          |
 duration     | time without time zone |           |          |
 mph          | numeric(2,1)           |           |          |
 shoe_id      | integer                |           |          |
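
For reference, a CREATE TABLE statement matching that \d output would look roughly like this (any constraints, indexes, or defaults are not visible in the describe output above, so none are shown):

CREATE TABLE stats (
    day_walked   date,
    cal_burned   numeric(4,1),
    miles_walked numeric(4,2),
    duration     time without time zone,
    mph          numeric(2,1),
    shoe_id      integer
);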

Using a Window Function, we can retrieve query results for a 3-day rolling average of calories burned:

SELECT day_walked,
       cal_burned,
       AVG(cal_burned) OVER (ORDER BY day_walked
                             ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS three_day_avg
FROM stats
WHERE EXTRACT(MONTH FROM day_walked) = 1;
 day_walked | cal_burned |    three_day_avg    
------------+------------+----------------------
 2019-01-01 |      132.8 | 132.8000000000000000
 2019-01-02 |      181.1 | 156.9500000000000000
 2019-01-07 |      207.3 | 173.7333333333333333
 2019-01-08 |      218.2 | 202.2000000000000000
 2019-01-09 |      193.0 | 206.1666666666666667
 2019-01-10 |      160.2 | 190.4666666666666667
 2019-01-11 |      206.3 | 186.5000000000000000
 2019-01-13 |      253.2 | 206.5666666666666667
 2019-01-14 |      177.6 | 212.3666666666666667
 2019-01-15 |      207.0 | 212.6000000000000000
 2019-01-16 |      248.7 | 211.1000000000000000
 2019-01-17 |      176.3 | 210.6666666666666667
 2019-01-19 |      200.2 | 208.4000000000000000
 2019-01-20 |      244.4 | 206.9666666666666667
 2019-01-21 |      205.9 | 216.8333333333333333
 2019-01-22 |      244.8 | 231.7000000000000000
 2019-01-23 |      231.8 | 227.5000000000000000
 2019-01-25 |      244.9 | 240.5000000000000000
 2019-01-27 |      302.7 | 259.8000000000000000
 2019-01-28 |      170.2 | 239.2666666666666667
 2019-01-29 |      235.5 | 236.1333333333333333
 2019-01-30 |      254.2 | 219.9666666666666667
 2019-01-31 |      229.5 | 239.7333333333333333
(23 rows)

To clean up all the extra digits in the ‘three_day_avg’ column, we can wrap the Window Function in the ROUND() function, keeping only 2 digits after the decimal:

walking_stats=> SELECT day_walked, cal_burned, ROUND(AVG(cal_burned) OVER(ORDER BY day_walked ROWS BETWEEN 2 PRECEDING AND CURRENT ROW),2) AS three_day_avg
FROM stats
WHERE EXTRACT(MONTH FROM day_walked) = 1;
 day_walked | cal_burned | three_day_avg
------------+------------+---------------
 2019-01-01 |      132.8 |        132.80
 2019-01-02 |      181.1 |        156.95
 2019-01-07 |      207.3 |        173.73
 2019-01-08 |      218.2 |        202.20
 2019-01-09 |      193.0 |        206.17
 2019-01-10 |      160.2 |        190.47
 2019-01-11 |      206.3 |        186.50
 2019-01-13 |      253.2 |        206.57
 2019-01-14 |      177.6 |        212.37
 2019-01-15 |      207.0 |        212.60
 2019-01-16 |      248.7 |        211.10
 2019-01-17 |      176.3 |        210.67
 2019-01-19 |      200.2 |        208.40
 2019-01-20 |      244.4 |        206.97
 2019-01-21 |      205.9 |        216.83
 2019-01-22 |      244.8 |        231.70
 2019-01-23 |      231.8 |        227.50
 2019-01-25 |      244.9 |        240.50
 2019-01-27 |      302.7 |        259.80
 2019-01-28 |      170.2 |        239.27
 2019-01-29 |      235.5 |        236.13
 2019-01-30 |      254.2 |        219.97
 2019-01-31 |      229.5 |        239.73
(23 rows)

And there it is: a 3-day rolling average of calories burned for the month of January.

We can use SQL to easily check the math for a particular row. I’ll focus on row 3, dated 2019-01-07. The windowing portion of the OVER() clause, ROWS BETWEEN 2 PRECEDING AND CURRENT ROW, essentially means: average the cal_burned values for the current row and the 2 rows above it – the 2 PRECEDING rows. The math for that row would look like this:

walking_stats=> SELECT ROUND((207.3 + 181.1 + 132.8) / 3,2) AS three_day_avg;
 three_day_avg
---------------
        173.73
(1 row)

I have written several blog posts about Window Functions within both the PostgreSQL and MySQL ecosystems; however, the 2 below are most similar to this post and provide more information concerning the windowing portion of the OVER() clause:

Other posts you may be interested in: Bulk CSV Uploads with Pandas and PostgreSQL

Try out Window Functions yourself to calculate rolling averages, sums, and the like on data sets that interest you – for instance, something like the rolling-total variation below. Hit me up in the comments with some examples; I’d love to hear about more interesting use cases.
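
For example, swapping AVG() for SUM() over the same frame yields a 3-day rolling total of miles walked (an untested variation on the query above, using the same stats table):

-- 3-day rolling total of miles walked, same frame as the rolling average.
SELECT day_walked,
       miles_walked,
       SUM(miles_walked) OVER (ORDER BY day_walked
                               ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS three_day_total
FROM stats
WHERE EXTRACT(MONTH FROM day_walked) = 1;

Thanks for reading!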

Like what you have read? See anything incorrect? Please comment below and thanks for reading!!!

A Call To Action!

Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share this post with someone else you know who would get the same value out of it as well.

Visit the Portfolio-Projects page to see blog post/technical writing I have completed for clients.

Have I mentioned how much I love a cup of coffee?!?!

To receive email notifications (Never Spam) from this blog (“Digital Owl’s Prose”) for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the ‘Click To Subscribe!’ button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl’s Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc…)

Be sure and visit the “Best Of” page for a collection of my best blog posts.


Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, article, or the Linux command line. He also shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.

Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided are performed in a personal development/learning workstation environment and should not be considered production quality or production ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.