Pyodbc SQL CRUD – Read: Examples with MySQL

Reading the data in a database table is a fundamental SQL operation. I suppose you could just store the data and be done with it. But, where is the fun in that? To see stored data, you use the SELECT command. In the CRUD acronym, that is the Read aspect. I have written extensively on SQL CRUD operations in the past (see list of posts at the end of this piece). But, how do you read rows of data with pyodbc as the middleware (termed loosely here) between you and the database? Keep reading to see several simple examples…


OS, Software, and DB used:

  • OpenSuse Leap 15.1
  • MySQL 8.0.18
  • pyodbc 4.0.27


Self-Promotion:

If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!


Pyodbc CRUD Series

Visit other posts that are part of the Pyodbc CRUD series:

Below is the setup connection information I use throughout this post:

In [1]: import pyodbc

In [2]: conn = pyodbc.connect("""DRIVER={MySQL ODBC 8.0 Unicode Driver};
   ...:     SERVER=localhost;DATABASE=walking;USER=j2112o;PASSWORD=my_password;""")

In [3]: cur = conn.cursor()
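Since the connection string is just a set of semicolon-delimited KEY=value pairs, a small helper can assemble it from keyword arguments, which makes typos in long literal strings easier to spot. This is a minimal sketch of my own; the build_conn_str name is hypothetical and not part of pyodbc:

```python
def build_conn_str(**params):
    # pyodbc accepts a single string of semicolon-delimited KEY=value pairs.
    return ";".join(f"{key}={value}" for key, value in params.items()) + ";"

conn_str = build_conn_str(
    DRIVER="{MySQL ODBC 8.0 Unicode Driver}",
    SERVER="localhost",
    DATABASE="walking",
    USER="j2112o",
    PASSWORD="my_password",
)
# conn = pyodbc.connect(conn_str) uses it exactly like the literal string above.
```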

I have a ‘walking’ database where I store particular stats from all of my many walks:

In [4]: cur.execute("USE walking")
Out[4]: <pyodbc.Cursor object at 0x...>

In [5]: show_tables = cur.tables()

In [6]: for row in show_tables:
   ...:     print(row)
   ...:
('walking', '', 'shoes_worn', 'TABLE', '')
('walking', '', 'walking_stats', 'TABLE', '')

As shown with the pyodbc tables() method, there are two tables: ‘shoes_worn’ and ‘walking_stats’. I’ll mainly target the ‘walking_stats’ table for reading data; here are its columns:

In [7]: stats_meta = cur.columns(table='walking_stats')                                                                                                        

In [8]: for row in stats_meta:
   ...:     print(row)
   ...:                                                                                                                                                        
('', None, 'walking_stats', 'day_walked', 91, 'date', 10, 6, None, None, 1, '', None, 9, 91, None, 1, 'YES')
('', None, 'walking_stats', 'cal_burned', 3, 'decimal', 4, 6, 1, 10, 1, '', None, 3, None, None, 2, 'YES')
('', None, 'walking_stats', 'miles_walked', 3, 'decimal', 4, 6, 2, 10, 1, '', None, 3, None, None, 3, 'YES')
('', None, 'walking_stats', 'duration', 92, 'time', 8, 6, 0, 10, 1, '', None, 9, 92, None, 4, 'YES')
('', None, 'walking_stats', 'mph', 3, 'decimal', 2, 4, 1, 10, 1, '', None, 3, None, None, 5, 'YES')
('', None, 'walking_stats', 'shoe_id', 4, 'integer', 10, 4, 0, 10, 1, '', None, 4, None, None, 6, 'YES')

Pyodbc has several handy meta-data methods, enabling you to query for schema information about databases and tables. Want to know more about them? Then you should definitely read, Pyodbc meta-data methods you should be using – interactions with MySQL.

Reading Data… Filtering at the MySQL query level…

I’ll query the ‘walking_stats’ table for the walks I took in November, storing the results in a variable. Notice I restrict the number of returned rows using the WHERE clause. In this example, I am filtering at what I call the MySQL level. This will make more sense when we reach the section in which the same filtering is handled at the pyodbc level:

In [9]: sql_query = "SELECT * FROM walking_stats WHERE MONTHNAME(day_walked) = 'November';"

Calling the pyodbc cursor execute() method and passing it the ‘sql_query’ variable along with chaining the fetchall() method, retrieves the query results:

In [10]: stats = cur.execute(sql_query).fetchall()

Assigning the fetched results to the ‘stats’ object (a list of pyodbc Row objects, not a cursor itself) lets us iterate through them with a for loop. Note that we also have access to the column names themselves using dot (.) notation.
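A pyodbc Row behaves much like a lightweight named tuple: each column is reachable both by position and by name. As a self-contained stand-in sketch (using collections.namedtuple rather than a live database), the two access styles look like this:

```python
from collections import namedtuple

# Hypothetical stand-in for a pyodbc Row; the real Row class behaves similarly.
Row = namedtuple("Row", ["day_walked", "cal_burned"])
row = Row("2019-11-01", 367.8)

by_name = row.cal_burned   # attribute (dot) access, as used below
by_index = row[1]          # positional access also works on pyodbc Rows
```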

With the below code, I’ll retrieve just the ‘day_walked’ and ‘cal_burned’ columns:

In [11]: for row in stats:
    ...:     print(row.day_walked, row.cal_burned)
    ...:                                                                                                                                                      
2019-11-01 367.8
2019-11-03 365.6
2019-11-04 367.9
2019-11-06 357.8
2019-11-08 354.8
2019-11-10 364.9
2019-11-12 357.7
2019-11-13 364.4
2019-11-14 364.2
2019-11-15 354.6
2019-11-17 356.6
2019-11-18 366.7
2019-11-19 369.9
2019-11-21 369.7
2019-11-24 354.1
2019-11-25 352.1
2019-11-27 356.6
2019-11-29 347.0

Suppose I have the requirement to retrieve rows in which the ‘cal_burned’ value is less than 357. Using the ‘row’ loop variable (any legal variable name works here), I can do just that. Instead of sending off another query to the server, I can filter with a boolean predicate test, much as I would in SQL, but at the pyodbc level, against the rows already stored in ‘stats’. Having access to the table column names is vital for filtering against them:

In [12]: for row in stats:
    ...:     if row.cal_burned < 357:
    ...:         print(row.cal_burned)
    ...:                                                                                                                                                      
354.8
354.6
356.6
354.1
352.1
356.6
347.0

Below is the equivalent SQL query executed in the MySQL Shell (SQL mode), producing the same results:

 MySQL  localhost:33060+ ssl  walking  SQL > SELECT cal_burned
                                          -> FROM walking_stats
                                          -> WHERE MONTHNAME(day_walked) = 'November'
                                          -> AND
                                          -> cal_burned < 357;
+------------+
| cal_burned |
+------------+
|      354.8 |
|      354.6 |
|      356.6 |
|      354.1 |
|      352.1 |
|      356.6 |
|      347.0 |
+------------+
7 rows in set (0.1721 sec)
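One refinement worth noting for the MySQL-level approach: rather than embedding literal values in the SQL string, pyodbc supports DB-API qmark (?) parameter placeholders, letting the driver handle quoting and escaping. The sqlite3 module in the standard library shares the same qmark paramstyle, so here is a runnable stand-in sketch (the in-memory table and sample values are invented for illustration):

```python
import sqlite3

# sqlite3 shares pyodbc's qmark paramstyle, so it serves as a runnable stand-in.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE walking_stats (day_walked TEXT, cal_burned REAL)")
cur.executemany("INSERT INTO walking_stats VALUES (?, ?)",
                [("2019-11-08", 354.8), ("2019-11-01", 367.8)])

# The value is passed separately from the SQL; no string formatting needed.
rows = cur.execute(
    "SELECT day_walked, cal_burned FROM walking_stats WHERE cal_burned < ?",
    (357,),
).fetchall()
```

With pyodbc the call shape is the same: cur.execute(sql, params).fetchall().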

Reading Data… Filtering at the pyodbc level…

Instead of filtering the rows in the SQL WHERE clause, we can use Python’s date type and its month attribute to retrieve rows for November, just as we did in the first example.

First, I’ll retrieve all the rows from the table into a variable. I also import the date class from the datetime module; pyodbc returns MySQL DATE columns as Python datetime.date objects, which is what makes the predicate test below possible:

In [8]: all_rows = """SELECT * FROM walking_stats;"""                                                                                                        

In [9]: more_rows = cur.execute(all_rows).fetchall()

In [10]: from datetime import date

Using the date object’s month attribute, I filter for any row that has a month value of 11 (November’s month number):

In [24]: for row in more_rows:
    ...:     if row.day_walked.month == 11:
    ...:         print(row)
    ...:          
    ...:          
    ...:                                                                                                                                                      
(datetime.date(2019, 11, 1), Decimal('367.8'), Decimal('3.66'), datetime.time(1, 7, 37), Decimal('3.2'), 6)
(datetime.date(2019, 11, 3), Decimal('365.6'), Decimal('3.66'), datetime.time(1, 7, 13), Decimal('3.3'), 6)
(datetime.date(2019, 11, 4), Decimal('367.9'), Decimal('3.49'), datetime.time(1, 7, 39), Decimal('3.1'), 6)
(datetime.date(2019, 11, 6), Decimal('357.8'), Decimal('3.50'), datetime.time(1, 5, 47), Decimal('3.2'), 6)
(datetime.date(2019, 11, 8), Decimal('354.8'), Decimal('3.49'), datetime.time(1, 5, 6), Decimal('3.2'), 6)
(datetime.date(2019, 11, 10), Decimal('364.9'), Decimal('3.64'), datetime.time(1, 7, 6), Decimal('3.3'), 6)
(datetime.date(2019, 11, 12), Decimal('357.7'), Decimal('3.59'), datetime.time(1, 5, 46), Decimal('3.3'), 6)
(datetime.date(2019, 11, 13), Decimal('364.4'), Decimal('3.65'), datetime.time(1, 7), Decimal('3.3'), 6)
(datetime.date(2019, 11, 14), Decimal('364.2'), Decimal('3.67'), datetime.time(1, 6, 57), Decimal('3.3'), 6)
(datetime.date(2019, 11, 15), Decimal('354.6'), Decimal('3.53'), datetime.time(1, 5, 12), Decimal('3.2'), 6)
(datetime.date(2019, 11, 17), Decimal('356.6'), Decimal('3.56'), datetime.time(1, 5, 34), Decimal('3.3'), 6)
(datetime.date(2019, 11, 18), Decimal('366.7'), Decimal('3.48'), datetime.time(1, 7, 25), Decimal('3.1'), 6)
(datetime.date(2019, 11, 19), Decimal('369.9'), Decimal('3.65'), datetime.time(1, 8), Decimal('3.2'), 5)
(datetime.date(2019, 11, 21), Decimal('369.7'), Decimal('3.72'), datetime.time(1, 7, 59), Decimal('3.3'), 5)
(datetime.date(2019, 11, 24), Decimal('354.1'), Decimal('3.28'), datetime.time(1, 5, 6), Decimal('3.0'), 5)
(datetime.date(2019, 11, 25), Decimal('352.1'), Decimal('3.38'), datetime.time(1, 4, 44), Decimal('3.1'), 5)
(datetime.date(2019, 11, 27), Decimal('356.6'), Decimal('3.43'), datetime.time(1, 5, 33), Decimal('3.1'), 5)
(datetime.date(2019, 11, 29), Decimal('347.0'), Decimal('3.36'), datetime.time(1, 3, 48), Decimal('3.2'), 5)

All the rows for November have been returned.

Now in one statement, I’ll filter rows for the month of November and those with a ‘cal_burned’ value less than 357:

In [25]: for row in more_rows:
    ...:     if row.day_walked.month == 11 and row.cal_burned < 357:
    ...:         print(row.day_walked, row.cal_burned)
    ...:                                                                                                                                                      
2019-11-08 354.8
2019-11-15 354.6
2019-11-17 356.6
2019-11-24 354.1
2019-11-25 352.1
2019-11-27 356.6
2019-11-29 347.0

In this example, all the filtering takes place at the pyodbc level.
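Since the rows are plain Python objects at this point, the same pyodbc-level filter also collapses neatly into a list comprehension. Here is a self-contained sketch: the namedtuple is a hypothetical stand-in for pyodbc Row objects, the first two sample rows are copied from the output above, and the December row is invented to show it gets excluded:

```python
from collections import namedtuple
from datetime import date
from decimal import Decimal

# Hypothetical stand-in rows; real pyodbc Rows expose the same attributes.
Row = namedtuple("Row", ["day_walked", "cal_burned"])
more_rows = [
    Row(date(2019, 11, 1), Decimal("367.8")),
    Row(date(2019, 11, 8), Decimal("354.8")),
    Row(date(2019, 12, 2), Decimal("350.0")),  # invented; filtered out below
]

# The same predicate as the for loop, written as a list comprehension.
november_low = [r for r in more_rows
                if r.day_walked.month == 11 and r.cal_burned < 357]
```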


SQL CRUD Basics

Are you curious about where all this SQL CRUD stems from? Read this series I wrote on SQL CRUD Basics:

You are progressing nicely! You know how to use pyodbc to create new rows of data and read them; however, there is more to come. Updating data is another fundamental SQL operation, and you can modify stored data through a pyodbc connection as well. Stay tuned for the next part in the series, in which I explore UPDATE functionality using pyodbc. Thanks for reading!

Like what you have read? See anything incorrect? Please comment below and thanks for reading!!!

A Call To Action!

Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share your findings here, with someone else you know who would get the same value out of it as well.

Visit the Portfolio-Projects page to see blog post/technical writing I have completed for clients.

Have I mentioned how much I love a cup of coffee?!?!

To receive email notifications (Never Spam) from this blog (“Digital Owl’s Prose”) for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the ‘Click To Subscribe!’ button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl’s Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc…)

Be sure and visit the “Best Of” page for a collection of my best blog posts.


Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, article, or the Linux command line. Among those, he shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.

Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided are performed on a personal development/learning workstation environment and should not be considered production-quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.
