In my day job as a Pipeline Survey Data Analyst, I lean heavily on the Python pyodbc package for interactions with an Access database. To gain more knowledge of, and better proficiency with, pyodbc, I thought I would use it with one of my favorite open-source databases, MySQL. Having recently written a 4-part series on SQL CRUD operations using MySQL, this post is a continuation of sorts. However, the Create operations here are executed through pyodbc instead of native SQL…

OS, Software, and DB used:
- OpenSuse Leap 15.1
- MySQL 8.0.18
- pyodbc 4.0.27
Self-Promotion:
If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!
As I have discussed in a previous post, the first letter of the CRUD acronym stands for CREATE. We also know that to introduce new rows of data into a database table, we use the INSERT statement. Feel free to read the first post of my SQL CRUD Basics series, SQL CRUD Basics Part 1 – Create, which is what inspired me to write this post.
Assume that for the duration of this post, I am using the database connection and cursor objects below:
import pyodbc

conn = pyodbc.connect("""DRIVER={MySQL ODBC 8.0 Unicode Driver};
                         SERVER=localhost;DATABASE=walking;
                         USER=j2112o;PASSWORD=my_password;""")
cur = conn.cursor()
cur.execute("USE walking;")
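One quick aside: if the connection fails with a driver-related error, it helps to confirm the exact driver name your ODBC installation exposes, since it must match the DRIVER= value in the connection string. A minimal check (the output depends entirely on your own setup):

import pyodbc

# List the ODBC drivers pyodbc can see; the MySQL entry must match the
# DRIVER={...} portion of the connection string above.
for driver_name in pyodbc.drivers():
    print(driver_name)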
Here is the description of the target table I’ll be INSERTing data into:
mysql> DESC walking_stats;
+--------------+--------------+------+-----+---------+-------+
| Field        | Type         | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+-------+
| day_walked   | date         | YES  |     | NULL    |       |
| cal_burned   | decimal(4,1) | YES  |     | NULL    |       |
| miles_walked | decimal(4,2) | YES  |     | NULL    |       |
| duration     | time         | YES  |     | NULL    |       |
| mph          | decimal(2,1) | YES  |     | NULL    |       |
| shoe_id      | int(11)      | YES  |     | NULL    |       |
+--------------+--------------+------+-----+---------+-------+
6 rows in set (0.43 sec)
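If you want to follow along, a table consistent with that description can be created through the same cursor. This is my own reconstruction from the DESC output, not necessarily the exact DDL behind the original table:

# Hypothetical DDL pieced together from the DESC output above; the original
# CREATE TABLE statement may differ (this is only for following along).
cur.execute("""CREATE TABLE IF NOT EXISTS walking_stats (
                   day_walked   DATE,
                   cal_burned   DECIMAL(4,1),
                   miles_walked DECIMAL(4,2),
                   duration     TIME,
                   mph          DECIMAL(2,1),
                   shoe_id      INT
               );""")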
Single-row inserts with execute()…
Since we are INSERTing data, it is a good idea to do the work with autocommit disabled. pyodbc follows the Python DB API and leaves autocommit set to False on new connections, but I prefer to set it explicitly so there is no doubt. With multiple INSERTs, if something goes wrong with one of them, it is best to have nothing committed rather than skewed or inconsistent data because some rows were loaded while others were not. Tracking which rows did get loaded and which did not could quickly turn into a headache that we don’t want.
With the conn connection object, you can easily turn autocommit off by setting it to False (this can be handled when you establish your initial connection object as well):
In [5]: conn.autocommit = False
I have this tuple of data that represents a day’s worth of walking stats. Let’s INSERT it into the target table using pyodbc’s execute() method:
In [9]: walk_day = ('2019-11-01',367.8,3.66,'01:07:37',3.2,6)

In [12]: cur.execute("""INSERT INTO walking_stats(day_walked,cal_burned,miles_walked,duration,mph,shoe_id) VALUES(?,?,?,?,?,?)""", walk_day)
Out[12]: <pyodbc.Cursor at 0x7fd0e0095dc8>
Note: In order to reduce the risk of SQL injection, ? placeholders are used for the column values in the VALUES clause.
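For contrast, here is the parameterized call next to the string-formatting pattern to avoid; the commented-out lines are shown only as an illustration and were not actually run:

# Parameterized: the ? markers keep the values separate from the SQL text,
# and the driver handles quoting and escaping.
cur.execute("""INSERT INTO walking_stats(day_walked,cal_burned,miles_walked,duration,mph,shoe_id)
               VALUES(?,?,?,?,?,?)""", walk_day)

# To avoid: formatting the values into the statement itself is what opens
# the door to SQL injection when any value comes from user input.
# sql = "INSERT INTO walking_stats VALUES('{}',{},{},'{}',{},{})".format(*walk_day)
# cur.execute(sql)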
pyodbc cursor objects have a useful rowcount attribute that holds the number of rows affected by the previous SQL command. As I understand it, SELECT queries typically produce a value of -1 for this attribute, and meaningful values are mostly obtained for DML commands such as INSERT and UPDATE.
Let’s find out its current value based on the INSERT just executed:
In [13]: print(cur.rowcount)
1
So a value of 1 indicates one affected row, exactly as expected.
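If you are curious about the SELECT behavior, a quick check along these lines will show it; I would treat the exact value as driver-dependent rather than something to rely on:

# For SELECT statements, many ODBC drivers report -1 here because the
# result rows have not been counted/fetched yet; treat rowcount as
# reliable only for DML statements like INSERT, UPDATE, and DELETE.
cur.execute("SELECT * FROM walking_stats")
print(cur.rowcount)
cur.fetchall()   # drain the result set so the cursor is free for the next statement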
Prior to the previous INSERT, I did not have any of November’s walking data in table ‘walking_stats’. This query will return the rows in which the month value of the ‘day_walked’ column is November:
mysql> SELECT *
    -> FROM walking_stats
    -> WHERE MONTHNAME(day_walked) = 'November';
Empty set (0.00 sec)
No results…
What gives?
Recall that I turned autocommit off prior to executing the INSERT. Therefore, I have to explicitly commit the change by calling the commit() method on the cur cursor object:
In [14]: cur.commit()
Now I should see that row of data in the ‘walking_stats’ table:
mysql> SELECT *
    -> FROM walking_stats
    -> WHERE MONTHNAME(day_walked) = 'November';
+------------+------------+--------------+----------+------+---------+
| day_walked | cal_burned | miles_walked | duration | mph  | shoe_id |
+------------+------------+--------------+----------+------+---------+
| 2019-11-01 |      367.8 |         3.66 | 01:07:37 |  3.2 |       6 |
+------------+------------+--------------+----------+------+---------+
1 row in set (0.00 sec)
A practical workflow that I use with success in my day job (borrowed heavily from the online documentation) is implemented as below:
try:
    conn.autocommit = False
    cur.execute(insert_sql, params)  # the INSERT statement and its parameters
except pyodbc.DatabaseError as e:
    cur.rollback()
    raise e
else:
    cur.commit()
finally:
    conn.autocommit = True
By using a try/except/else/finally block, we have some assurance that just about any scenario is handled should issues arise while performing DML operations. Since autocommit is set to False, either execute() runs without a hitch and the work is committed in the else block, or the transaction is rolled back in the except block. No matter which of the two occurs, autocommit is set back to True in the finally block.
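Plugging the single-row INSERT from earlier into that pattern gives a concrete version of the same idea, reusing the conn, cur, and walk_day objects defined above (shown for illustration; running it again would of course insert a duplicate row):

try:
    conn.autocommit = False
    cur.execute("""INSERT INTO walking_stats(day_walked,cal_burned,miles_walked,duration,mph,shoe_id)
                   VALUES(?,?,?,?,?,?)""", walk_day)
except pyodbc.DatabaseError as e:
    # Something went wrong: undo any work in the open transaction first,
    # then re-raise so the error is not silently swallowed.
    cur.rollback()
    raise e
else:
    # No exception was raised, so it is safe to make the INSERT permanent.
    cur.commit()
finally:
    # Runs in every case, restoring autocommit for subsequent work.
    conn.autocommit = True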
Multi-row inserts with executemany()…
Since I keep up with the walking metrics that interest me in a CSV file, I will use it as the data source for multiple INSERTs via the pyodbc cursor’s executemany() method. Using Python’s built-in csv module (import csv), I’ll read in the rows of data with the csv.reader() function:
In [4]: with open(r'/home/joshua/Practice_Data/Fitness_DB_Data/nov_2019_hiking_stats.csv', 'r') as csv_file:
   ...:     csv_reader = csv.reader(csv_file, delimiter=',')
   ...:     for row in csv_reader:
   ...:         print(row)
   ...:
['2019-11-03', '365.6', '3.66', '01:07:13', '3.3', '6']
['2019-11-04', '367.9', '3.49', '01:07:39', '3.1', '6']
['2019-11-06', '357.8', '3.50', '01:05:47', '3.2', '6']
['2019-11-08', '354.8', '3.49', '01:05:06', '3.2', '6']
['2019-11-10', '364.9', '3.64', '01:07:06', '3.3', '6']
['2019-11-12', '357.7', '3.59', '01:05:46', '3.3', '6']
['2019-11-13', '364.4', '3.65', '01:07:00', '3.3', '6']
['2019-11-14', '364.2', '3.67', '01:06:57', '3.3', '6']
['2019-11-15', '354.6', '3.53', '01:05:12', '3.2', '6']
['2019-11-17', '356.6', '3.56', '01:05:34', '3.3', '6']
['2019-11-18', '366.7', '3.48', '01:07:25', '3.1', '6']
['2019-11-19', '369.9', '3.65', '01:08:00', '3.2', '5']
['2019-11-21', '369.7', '3.72', '01:07:59', '3.3', '5']
['2019-11-24', '354.1', '3.28', '01:05:06', '3.0', '5']
['2019-11-25', '352.1', '3.38', '01:04:44', '3.1', '5']
['2019-11-27', '356.6', '3.43', '01:05:33', '3.1', '5']
['2019-11-29', '347.0', '3.36', '01:03:48', '3.2', '5']
Each row yielded by ‘csv_reader’, the csv.reader() object, is in fact a Python list:
In [5]: print(type(row))
<class 'list'>
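Notice that every value in those lists is still a string. MySQL converts them to the column types from the DESC output as they are inserted, which is what I rely on below. If you would rather hand the driver native Python types up front, a small optional variation of my own (nothing later in the post depends on it) could convert each row first:

import csv
from decimal import Decimal

# Re-read the file and convert each row of strings into Python types that
# line up with the table's column types. This step is optional: the MySQL
# ODBC driver also accepts the raw strings, as the executemany() call below shows.
with open(r'/home/joshua/Practice_Data/Fitness_DB_Data/nov_2019_hiking_stats.csv', 'r') as csv_file:
    typed_rows = [
        (day, Decimal(cal), Decimal(miles), duration, Decimal(mph), int(shoe))
        for day, cal, miles, duration, mph, shoe in csv.reader(csv_file, delimiter=',')
    ]

Either way, the resulting sequence of rows can be passed straight to executemany().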
I mentioned working in a try/except/else/finally block earlier and will use that pattern now for the multi-row INSERT, passing in the csv_reader variable to supply the values for executemany():
In [14]: with open(r'/home/joshua/Practice_Data/Fitness_DB_Data/nov_2019_hiking_stats.csv', 'r') as csv_file:
    ...:     csv_reader = csv.reader(csv_file, delimiter=',')
    ...:     try:
    ...:         conn.autocommit = False
    ...:         cur.executemany("""INSERT INTO walking_stats(day_walked,cal_burned,miles_walked,duration,mph,shoe_id) VALUES(?,?,?,?,?,?)""", csv_reader)
    ...:     except pyodbc.DatabaseError as e:
    ...:         cur.rollback()
    ...:         raise e
    ...:     else:
    ...:         cur.commit()
    ...:     finally:
    ...:         conn.autocommit = True
No error occurred while running executemany(), so the statement was committed in the else block. There are 17 rows of data in the csv_reader object, which brings the total number of November rows in the ‘walking_stats’ table to 18.
Let’s verify over in the MySQL command-line client with a couple of queries:
mysql> SELECT COUNT(*)
    -> FROM walking_stats
    -> WHERE MONTHNAME(day_walked) = 'November';
+----------+
| COUNT(*) |
+----------+
|       18 |
+----------+
1 row in set (0.00 sec)
mysql> SELECT * FROM walking_stats WHERE MONTHNAME(day_walked) = 'November';
+------------+------------+--------------+----------+------+---------+
| day_walked | cal_burned | miles_walked | duration | mph  | shoe_id |
+------------+------------+--------------+----------+------+---------+
| 2019-11-01 |      367.8 |         3.66 | 01:07:37 |  3.2 |       6 |
| 2019-11-03 |      365.6 |         3.66 | 01:07:13 |  3.3 |       6 |
| 2019-11-04 |      367.9 |         3.49 | 01:07:39 |  3.1 |       6 |
| 2019-11-06 |      357.8 |         3.50 | 01:05:47 |  3.2 |       6 |
| 2019-11-08 |      354.8 |         3.49 | 01:05:06 |  3.2 |       6 |
| 2019-11-10 |      364.9 |         3.64 | 01:07:06 |  3.3 |       6 |
| 2019-11-12 |      357.7 |         3.59 | 01:05:46 |  3.3 |       6 |
| 2019-11-13 |      364.4 |         3.65 | 01:07:00 |  3.3 |       6 |
| 2019-11-14 |      364.2 |         3.67 | 01:06:57 |  3.3 |       6 |
| 2019-11-15 |      354.6 |         3.53 | 01:05:12 |  3.2 |       6 |
| 2019-11-17 |      356.6 |         3.56 | 01:05:34 |  3.3 |       6 |
| 2019-11-18 |      366.7 |         3.48 | 01:07:25 |  3.1 |       6 |
| 2019-11-19 |      369.9 |         3.65 | 01:08:00 |  3.2 |       5 |
| 2019-11-21 |      369.7 |         3.72 | 01:07:59 |  3.3 |       5 |
| 2019-11-24 |      354.1 |         3.28 | 01:05:06 |  3.0 |       5 |
| 2019-11-25 |      352.1 |         3.38 | 01:04:44 |  3.1 |       5 |
| 2019-11-27 |      356.6 |         3.43 | 01:05:33 |  3.1 |       5 |
| 2019-11-29 |      347.0 |         3.36 | 01:03:48 |  3.2 |       5 |
+------------+------------+--------------+----------+------+---------+
18 rows in set (0.00 sec)
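The same sanity check can be run through pyodbc itself. Read operations get their own treatment in the upcoming part 2, so consider this just a small preview:

# COUNT(*) for November through the existing cursor; fetchone() returns a
# single pyodbc Row object, and index 0 holds the count.
cur.execute("""SELECT COUNT(*)
               FROM walking_stats
               WHERE MONTHNAME(day_walked) = 'November';""")
print(cur.fetchone()[0])   # 18, matching the MySQL client output above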
Database connectivity using pyodbc definitely has its merits. I plan to explore and blog about what I learn in a pyodbc/MySQL CRUD-style workflow. Want to receive a notification when part 2, Pyodbc CRUD – Read, is published? Subscribe to Digital Owl’s Prose in the sidebar to stay current with the series.
I’d love to know your thoughts about the post, so please comment freely below. See anything incorrect? Let me know that too. Thanks for reading!
A Call To Action!
Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share it with someone else you know who would get the same value out of it.
Visit the Portfolio-Projects page to see blog post/technical writing I have completed for clients.
Have I mentioned how much I love a cup of coffee?!?!
To receive email notifications (Never Spam) from this blog (“Digital Owl’s Prose”) for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the ‘Click To Subscribe!’ button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl’s Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc…)
Be sure and visit the “Best Of” page for a collection of my best blog posts.
Josh Otwell has a passion to study and grow as a SQL Developer and blogger. His other favorite activities find him with his nose buried in a good book, an article, or the Linux command line. He also shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.
Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solutions. Most, if not all, of the examples are performed in a personal development/learning workstation environment and should not be considered production-quality or production-ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.