Planet MySQL - https://planet.mysql.com
  1. MySQL Upgrade Checker, Review the Report, Before you Upgrade.

    MySQL introduced the MySQL “Upgrade Checker” initially to support upgrades from the MySQL 5.7 to the 8.0 series. However, the continuous-delivery development model of the MySQL 8.0 release series has kept this tool useful, as it can also report on features delivered throughout the 8.0 release life. And yes, bug fixes are always part of those development releases… Read More »
  2. Enhanced Password Management Systems in MySQL 8: Part 1

    MySQL 8 comes with a lot of good features, and recently I explored its password management systems. I wanted to put together a series of blogs about it, and this is the first part. In this post, I am going to explain the following topics in detail:

      - Password Reuse Policy
      - Random Password Generation

    Password Reuse Policy

    MySQL has implemented restrictions on reusing passwords. The restriction can be established in two ways:

      - Number of password changes
      - Time elapsed

    Number of Password Changes

    From the MySQL documentation: "If an account is restricted on the basis of number of password changes, a new password cannot be chosen from a specified number of the most recent passwords." To test this, in my local environment I created the user with "number of password changes = 2":

    mysql> create user 'herc'@'localhost' identified by 'Percona@321' password history 2;
    Query OK, 0 rows affected (0.02 sec)

    mysql> select user, host, password_reuse_history from mysql.user where user='herc'\G
    *************************** 1. row ***************************
                      user: herc
                      host: localhost
    password_reuse_history: 2
    1 row in set (0.00 sec)

    Here "password history 2" defines the number of password changes.
    MySQL tracks the password changes in the table "mysql.password_history":

    mysql> select * from mysql.password_history;
    +-----------+------+----------------------------+---------------------------------------------------------------------+
    | Host      | User | Password_timestamp         | Password                                                            |
    +-----------+------+----------------------------+---------------------------------------------------------------------+
    | localhost | herc | 2021-09-20 15:44:42.295778 | $A$005$=R:q'M(Kh#D];c~SdCLyluq2UVHFobjWOFTwn2JYVFDyI042sl56B7DCPSK5 |
    +-----------+------+----------------------------+---------------------------------------------------------------------+
    1 row in set (0.00 sec)

    Now, I am going to change the password for the account "herc@localhost":

    mysql> alter user 'herc'@'localhost' identified by 'MySQL@321';
    Query OK, 0 rows affected (0.02 sec)

    mysql> select * from mysql.password_history\G
    *************************** 1. row ***************************
                  Host: localhost
                  User: herc
    Password_timestamp: 2021-09-20 15:49:15.459018
              Password: CGeRQT31UUwtw194KOKGdNbgj3558VUB.dxcoS8r4IKpG8
    *************************** 2. row ***************************
                  Host: localhost
                  User: herc
    Password_timestamp: 2021-09-20 15:44:42.295778
              Password: $A$005$=R:q'M(Kh#D];c~SdCLyluq2UVHFobjWOFTwn2JYVFDyI042sl56B7DCPSK5
    2 rows in set (0.00 sec)

    It worked. After changing the password, I verified the "mysql.password_history" table; it now tracks the last two passwords. Next, I am going to change the password for the account "herc@localhost" again.
    This time, I am going to assign the same password that was assigned during user creation, "Percona@321":

    mysql> alter user 'herc'@'localhost' identified by 'Percona@321';
    ERROR 3638 (HY000): Cannot use these credentials for 'herc@localhost' because they contradict the password history policy

    It doesn't work; I am not able to reuse the first password, because per my reuse policy I can't reuse the last two passwords, and they are being tracked in the "mysql.password_history" table. So, in my case, if I want to reuse my first password again, it must first drop out of that list. I assigned a different password, so that my first password was removed from the list of the last two passwords, and then tried to assign the first password again:

    mysql> alter user 'herc'@'localhost' identified by 'Herc@321';
    Query OK, 0 rows affected (0.01 sec)

    mysql> alter user 'herc'@'localhost' identified by 'Percona@321';
    Query OK, 0 rows affected (0.02 sec)

    This works now. This is how you can restrict the reuse of passwords based on the number of password changes. It can be implemented globally, and at startup for all users, using the variable "password_history":

    # vi my.cnf
    [mysqld]
    password_history=6

    # set global
    mysql> set global password_history=5;
    Query OK, 0 rows affected (0.00 sec)

    Password Reuse Policy Based on Time Elapsed

    From the MySQL documentation: "If an account is restricted based on time elapsed, a new password cannot be chosen from passwords in the history that are newer than a specified number of days." To test this in my local environment, I created the user "sri@localhost" with a password reuse interval of five days:

    mysql> create user 'sri'@'localhost' identified by 'Percona@321' password reuse interval 5 day;
    Query OK, 0 rows affected (0.01 sec)

    mysql> select user, host, password_reuse_time from mysql.user where user='sri'\G
    *************************** 1. row ***************************
                   user: sri
                   host: localhost
    password_reuse_time: 5
    1 row in set (0.00 sec)

    So, this means that for five days I can't reuse a password for the account "sri@localhost":

    mysql> select * from mysql.password_history where user='sri'\G
    *************************** 1. row ***************************
                  Host: localhost
                  User: sri
    Password_timestamp: 2021-09-20 16:09:27.918585
              Password: $A$005$+B e3!C9&8m eFRG~IqRWX4b6PtzLA8I4VsdYvWU3qRs/nip/QRhXXR5phT6
    1 row in set (0.00 sec)

    Now, I am going to run ALTER to change the password:

    mysql> alter user 'sri'@'localhost' identified by 'Herc@321';
    Query OK, 0 rows affected (0.02 sec)

    mysql> select * from mysql.password_history where user='sri'\G
    *************************** 1. row ***************************
                  Host: localhost
                  User: sri
    Password_timestamp: 2021-09-20 16:17:51.840483
              Password: $A$005$~k7qp8.OP=^#e79qwtiYd7/cmCFLvHM7MHFbvfX2WlhXqzjmrN03gGZ4
    *************************** 2. row ***************************
                  Host: localhost
                  User: sri
    Password_timestamp: 2021-09-20 16:09:27.918585
              Password: $A$005$+B e3!C9&8m eFRG~IqRWX4b6PtzLA8I4VsdYvWU3qRs/nip/QRhXXR5phT6
    2 rows in set (0.00 sec)

    It is working. But if I try to reuse either of those passwords within five days, the reuse policy will not allow it. Let me try the first password now:

    mysql> alter user 'sri'@'localhost' identified by 'Percona@321';
    ERROR 3638 (HY000): Cannot use these credentials for 'sri@localhost' because they contradict the password history policy

    It gives the error as expected.
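The two reuse restrictions demonstrated above can be condensed into a small model. This is a hypothetical Python sketch of the policy logic only (MySQL stores password hashes, not plaintext, and enforces this server-side; the class and names here are illustrative, not MySQL internals):

```python
from datetime import datetime, timedelta

class PasswordHistory:
    """Toy model of MySQL's password-reuse checks: reject a new
    password if it matches one of the last `history` passwords, or
    any password used within the last `reuse_interval_days` days."""

    def __init__(self, history=0, reuse_interval_days=0):
        self.history = history
        self.reuse_interval = timedelta(days=reuse_interval_days)
        self.entries = []  # (timestamp, password); MySQL keeps hashes

    def can_use(self, password, now=None):
        now = now or datetime.now()
        # Rule 1: number of password changes (PASSWORD HISTORY n)
        recent = self.entries[-self.history:] if self.history else []
        if any(pw == password for _, pw in recent):
            return False
        # Rule 2: time elapsed (PASSWORD REUSE INTERVAL n DAY)
        if self.reuse_interval:
            if any(pw == password and now - ts < self.reuse_interval
                   for ts, pw in self.entries):
                return False
        return True

    def set_password(self, password, now=None):
        now = now or datetime.now()
        if not self.can_use(password, now):
            raise ValueError("contradicts the password history policy")
        self.entries.append((now, password))

# Mirrors the 'herc'@'localhost' session: PASSWORD HISTORY 2
h = PasswordHistory(history=2)
h.set_password("Percona@321")
h.set_password("MySQL@321")
print(h.can_use("Percona@321"))  # False: still among the last 2
h.set_password("Herc@321")       # pushes the first password out
print(h.can_use("Percona@321"))  # True: reusable again
```

The same object models the five-day interval case by constructing it with reuse_interval_days=5 and leaving history at 0.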
    This restriction can be implemented globally, and at startup for all users, using the variable "password_reuse_interval":

    # vi my.cnf
    [mysqld]
    password_reuse_interval=365

    # set global
    mysql> set global password_reuse_interval=365;
    Query OK, 0 rows affected (0.00 sec)

    Random Password Generation

    As of MySQL 8.0.18, MySQL can create random passwords for user accounts. This means we don't need to assign the passwords ourselves; MySQL takes care of it. The following statements are supported:

      - CREATE USER
      - ALTER USER
      - SET PASSWORD

    We use "RANDOM PASSWORD" instead of providing the password text, and the generated password is displayed on the screen during creation. For example:

    mysql> create user 'sakthi'@'localhost' identified by random password;
    +--------+-----------+----------------------+
    | user   | host      | generated password   |
    +--------+-----------+----------------------+
    | sakthi | localhost | .vZYy+<<BO7l1;vtIufH |
    +--------+-----------+----------------------+
    1 row in set (0.01 sec)

    mysql> alter user 'sri'@'localhost' identified by random password;
    +------+-----------+----------------------+
    | user | host      | generated password   |
    +------+-----------+----------------------+
    | sri  | localhost | 5wb>2[]q*jbDsFvlN-i_ |
    +------+-----------+----------------------+
    1 row in set (0.02 sec)

    The password hashes are stored in the "mysql.user" table:

    mysql> select user, authentication_string from mysql.user where user in ('sakthi','sri')\G
    *************************** 1. row ***************************
                     user: sakthi
    authentication_string: $A$005$L`PYcedj%3tz*J>ioBP1.Rsrj7H8wtelqijvV0CFnXVnWLNIc/RZL0C06l4oA
    *************************** 2. row ***************************
                     user: sri
    authentication_string: $A$005$/k?aO&ap.#b= ^zt[E|x9q3w9uHn1oEumXUgnqNMH8xWo4xd/s26hTPKs1AbC2
    2 rows in set (0.00 sec)

    By default, the password length is 20 characters, controlled by the variable "generated_random_password_length". We can change the password length using that variable; the allowed range is 5 to 255.

    mysql> select @@generated_random_password_length;
    +------------------------------------+
    | @@generated_random_password_length |
    +------------------------------------+
    |                                 20 |
    +------------------------------------+
    1 row in set (0.00 sec)

    Randomly generated passwords are exempt from the "validate_password" policy, if that component is installed in MySQL. Hopefully, this blog will be helpful for you to learn about the password reuse policy and random passwords in MySQL 8. There are a few more features to go over, which will be covered in the next part of the blog series. Stay tuned!

    Percona Distribution for MySQL is the most complete, stable, scalable, and secure open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it's free to use! Download Percona Distribution for MySQL Today
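A client-side analogue of random password generation can be sketched in a few lines of Python. This is a hedged illustration of the idea, not MySQL's actual algorithm; only the default of 20 and the 5-255 bounds mirror the generated_random_password_length variable described above:

```python
import secrets
import string

def generate_random_password(length=20):
    """Generate a random password; the default of 20 and the 5-255
    bounds mirror MySQL's generated_random_password_length variable."""
    if not 5 <= length <= 255:
        raise ValueError("length must be between 5 and 255")
    # Printable, non-space characters, similar in spirit to the
    # generated passwords shown in the examples above.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_random_password()
print(len(pw))  # 20
```

The `secrets` module is used rather than `random` because password generation needs a cryptographically secure source of randomness.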
  3. Medium Cross-post – CodeIgniter 4 CRUD: Update

    In some applications, data may never change. Yet in others, data changes numerous times in its lifecycle. In SQL, the UPDATE command changes existing rows of data. CodeIgniter 4 Models have 2 methods available for update operations: update() and save(). Continue reading and learn more about update()…

    Self-Promotion: If you enjoy the content written here, by all means share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!

    CodeIgniter 4 CRUD Series with MySQL

    This post is a re-share of an article I originally published on my Medium account and is part 3 in the CodeIgniter 4 CRUD with MySQL series. Be sure to visit the 2 previous blog posts to get up to speed:

      - CodeIgniter 4 CRUD Series with MySQL: Create
      - CodeIgniter 4 CRUD Series with MySQL: Read

    CodeIgniter 4, like most frameworks, provides an ORM (or something comparable) to help ease the pain of working with databases (if that is in fact pain for you). In CodeIgniter's case, there are the Model and Query Builder classes.

    CodeIgniter 4 Model update() method

    The CodeIgniter Model class has CRUD methods built right in, and they are super easy to employ for standard CRUD operations. The model update() method accepts 2 parameters for UPDATE operations: the primary key id of the target row to be updated, and an associative array of data with key/value pairs matching the table's columns. Learn more about the update() method and see examples in the post, CodeIgniter 4 CRUD Series with MySQL: Update. I write a weekly email about SQL/PHP that I'm studying and interested in. If that sounds like something you would like to be a part of, learn more about it here. Thank you!
    Further Reading

    Feel free to visit any of the below posts for similar content:

      - PHP date() function for common date formats
      - RECURSIVE CTE Fun with SUBSTRING() in MySQL
      - How to use the LEAD() Window Function and discover gaps in Dates

    As always, if you have any questions or see any mistakes in the code, please let me know via the comments. Constructive comments help me provide accurate blog posts and are much appreciated. Consider making a donation as I continue to provide valuable content here on my blog. Thank you so much!!! Every bit is much appreciated and helps tremendously! Like what you have read? See anything incorrect? Please comment below, and thank you for reading!!!

    A Call To Action!

    Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share your findings here with someone else you know who would get the same value out of it as well. Visit the Portfolio-Projects page to see blog posts/technical writing I have completed for clients. To receive email notifications (never spam) from this blog ("Digital Owl's Prose") for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the 'Click To Subscribe!' button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl's Prose Privacy Policy Page for any questions you may have about email updates, opt-in, opt-out, contact forms, etc.) Be sure to visit the "Best Of" page for a collection of my best blog posts.

    Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, article, or the Linux command line. Among those, he shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.

    Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided are performed on a personal development/learning workstation environment and should not be considered production quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.

    The post Medium Cross-post – CodeIgniter 4 CRUD: Update appeared first on Digital Owl's Prose.
  4. Making Aurora Write Latency 15x Higher (or More!) by Choosing a Bad Primary Key

    Primary Key design is important for InnoDB performance, and choosing a poor PK definition will have an impact on performance and also on write propagation in databases. With Aurora, this impact is even worse than you might expect. In short, we consider a poor Primary Key definition in InnoDB to be "anything but quasi-sequential values", which can cause very random access to data and thus increase IO dependency. In this post, I'll try to demonstrate the potential impact of primary key design when running on Aurora, and how a bad design can lead to a 15x write latency penalty (or more).

    The Analysis

    Recently I worked on a case where a customer was having issues scaling writes in Aurora MySQL. While this is a known limitation in Aurora, considering how the distributed storage layer syncs data among all nodes of the cluster, we observed additional latency occurring as more clients were writing to the database. The first thing I noticed was that their main table had a poor Primary Key definition: they were using UUID-like columns based on VARCHAR data types. The values of this Primary Key were very random, which is really bad for a B-tree-based storage engine like InnoDB. With this in mind, I referred to a great post from my colleague Yves Trudeau explaining why UUIDs are bad for performance, so based on this premise I decided to measure how big this impact can be in the Aurora world. The tests I ran used a db.r5.2xlarge Aurora MySQL cluster (8 vCPUs and 64 GB of RAM), which is pretty similar to the cluster my customer was using.
    First, I started with two very basic tables, to avoid any extra overhead while staying close to a real case:

    CREATE TABLE `test_sequential_PK` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `number` int(10) NOT NULL,
      PRIMARY KEY (`id`),
      KEY `idx_number` (`number`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

    CREATE TABLE `test_random_PK` (
      `id` varchar(26) NOT NULL,
      `number` int(10) NOT NULL,
      PRIMARY KEY (`id`),
      KEY `idx_number` (`number`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

    Then I created two simple Lua scripts to execute with sysbench – super simple, I'd say – as they just perform inserts, either using the auto_increment property for the test_sequential_PK table or generating random values for the test_random_PK table. The purpose was not to measure Aurora performance as a whole but the write latency when the Primary Key is not optimal. I started the process by warming up the instances for a few days, running both sysbench scripts and pushing the instance really hard while filling up the InnoDB buffer pool. Results were pretty good for some time, until the traffic became IO-bound: it took a few days, but after some time we started to see an increase in write latency. I created the initial set of data using 50 concurrent threads, which means the graphic above is not very helpful for the analysis I'm going to make.
    After I was sure the buffer pool was filled and the instance was warmed up, I verified that the dataset was bigger than the buffer pool:

    SELECT concat(table_schema,'.',table_name) schema_table,
           concat(round(table_rows/1000000,2),'M') rows,
           concat(round(data_length/(1024*1024*1024),2),'G') Data,
           round(data_length/table_rows,0) DataRow,
           concat(round(index_length/(1024*1024*1024),2),'G') idx,
           round(index_length/table_rows,0) as IdxRow,
           concat(round((data_length+index_length)/(1024*1024*1024),2),'G') totSize,
           round(index_length/data_length,2) idxfrac
    FROM information_schema.TABLES
    WHERE table_schema in ('percona')
    ORDER BY data_length+index_length DESC LIMIT 10;
    +----------------------------+----------+---------+---------+--------+--------+---------+---------+
    | schema_table               | rows     | Data    | DataRow | idx    | IdxRow | totSize | idxfrac |
    +----------------------------+----------+---------+---------+--------+--------+---------+---------+
    | percona.test_random_PK     | 1586.25M | 104.53G |      71 | 73.04G |     49 | 177.57G |    0.70 |
    | percona.test_sequential_PK | 1840.49M | 54.74G  |      32 | 24.54G |     14 | 79.27G  |    0.45 |
    +----------------------------+----------+---------+---------+--------+--------+---------+---------+

    I'll explain the difference between the table sizes later in this post. After this, I started running separate tests to check how write latency is affected by the table design. First, I ran sysbench as follows:

    sysbench /usr/share/sysbench/insert_sequence.lua --mysql-user=admin --mysql-password=xxxxxx \
      --mysql-host=francisco-sysbench.cluster --mysql-db=percona --report-interval=1 \
      --threads=15 --events=0 --time=0 run

    We can see that performance remains steady at roughly 124 microseconds on average with all threads running, so it looks like there is no impact for about 24 hours.
    Then I tested the random inserts by running:

    sysbench /usr/share/sysbench/insert_random.lua --mysql-user=admin --mysql-password=xxxxxx \
      --mysql-host=francisco-sysbench.cluster --mysql-db=percona --report-interval=1 \
      --threads=15 --events=0 --time=0 run

    This is a huge impact if you consider that the random access is causing the instance to suffer: roughly a 15x write latency increase. These numbers were very impressive compared to my previous experience, so, being extra curious, I checked what was reported in CloudWatch for Aurora write latency over the previous 3 days. The 3 stages of the checks are quite clear:

      - Warming up the instance and buffer pool by pushing really hard with sysbench on both tables (peak of load at the beginning)
      - First round of inserts using sequential writes
      - Last round using random writes

    As I said, the code used in each script was very simple, on purpose, to avoid adding overhead anywhere other than the database. For reference, this is the interesting portion of code for the random writes:

    local inserts = {
      "INSERT INTO percona.test_random_PK (id, number) VALUES ('%s', %i)"
    }

    local rnd_str_tmpl = "###########-###########"

    function execute_inserts()
      local id = sysbench.rand.string(rnd_str_tmpl)
      con:query(string.format(inserts[1], id, sb_rand(1, sysbench.opt.table_size)))
    end

    And for the sequential writes:

    local inserts = {
      "INSERT INTO percona.test_sequential_PK (number) VALUES (%i)"
    }

    function execute_inserts()
      -- INSERT
      con:query(string.format(inserts[1], sb_rand(1, sysbench.opt.table_size)))
    end

    Conclusion

    Primary key design for InnoDB tables has been discussed at length in several posts, and the impact of the UUID format in particular was perfectly described in the post I mentioned above, so there are no surprises. What I found interesting in this analysis is that in Aurora there seems to be an extra impact from this random access.
    What's happening underneath is not entirely clear, so I can only offer a theory: in Aurora, there are no dirty pages; every commit is synchronized to disk immediately so that all replicas can be virtually in sync with the source server. This reduces the chance of hitting the same page in memory on every write operation, so the whole sequence of grabbing the page, placing it in memory, updating the information, flushing, and committing synchronously to the block storage is particularly expensive in Aurora. Additionally, given that every secondary index includes an entry for the Primary Key, the increase in data size (and thus the disk footprint) can be significant, as we saw above, which may also cause extra disk utilization. So be warned: plan your PKs properly if you don't want to see a huge performance degradation in AWS Aurora as the database grows and the workload increases.
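The penalty described here comes down to locality. As a back-of-the-envelope illustration (a toy model, not an Aurora or InnoDB benchmark), we can measure what fraction of inserts land at the tail of an ordered index: sequential keys always append to the same hot "page", while UUID-like keys scatter across the whole key space:

```python
import bisect
import uuid

def tail_insert_ratio(keys):
    """Insert keys one by one into an ordered list and return the
    fraction that land at the tail -- a rough proxy for B-tree page
    locality (tail inserts keep hitting the same hot page, interior
    inserts touch cold pages all over the index)."""
    ordered = []
    tail = 0
    for k in keys:
        pos = bisect.bisect(ordered, k)
        if pos == len(ordered):
            tail += 1
        ordered.insert(pos, k)
    return tail / len(keys)

n = 5000
sequential = [f"{i:010d}" for i in range(n)]          # auto_increment-like PK
random_keys = [str(uuid.uuid4()) for _ in range(n)]   # UUID-like VARCHAR PK

print(tail_insert_ratio(sequential))   # 1.0: every insert is an append
print(tail_insert_ratio(random_keys))  # near 0: almost all inserts are interior
```

With sequential keys the ratio is exactly 1.0; with random keys it collapses toward zero, which is the access pattern that, per the theory above, Aurora pays for on every synchronous commit.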
  5. MyDumper 0.11.1 is Now Available

    The new MyDumper 0.11.1 version, which includes many new features and bug fixes, is now available. You can download the code from here. For this release, there are three main changes: 1) we added config file functionality, which allows users to set session-level variables (one of the most requested features!); 2) we developed a better and more robust import mechanism; and 3) we fixed all the filename-related issues. Those changes, and mostly the last one, forced us to change the version number from 0.10.9 to 0.11.1, as a backup taken in 0.10.x will not work in 0.11.x and vice versa.

    New Features:

      - Adding order by part functionality #388
      - Improve estimated_step #376 #373
      - Adding config file functionality #370
        - Use only a config file to load the parameters #318
        - We hope to add parameters and support custom SQL mode and character configuration #274
        - [myloader] Add option to disable Galera Cluster (Wsrep) replication before importing #159
        - mydumper can't recreate mysql.proc + mysql.events due to 5.7 sql_mode #142
        - trouble with global sql_big_selects=0 #50
      - Enabling MAX_EXECUTION_TIME option #368
        - mydumper dump big table failed, because of dumping time longer than max_execution_time #257
        - trouble with global sql_big_selects=0 #50
        - We hope to add parameters and support custom SQL mode and character configuration #274
      - Adding sync mechanism and constraint split #352
        - [fast index] Deadlock found when trying to get lock #342
        - usingFastIndex: indexes may be created while restore is in progress #292
        - [enhancement] optimized data loading order #285
        - Enhancement request: myloader option to prioritize size #78
      - Table and database names changed to fix several issues #399
        - bug for overwrite-tables option #272
        - Problems related to log output when backing up table name with #179 #210
        - parallel schema restore? #169 #311
        - bug with table name #41 #56
        - Export with table folders #212
        - bug for overwrite-tables option #272
        - [BUG] table name make mydumper fail #391
        - [BUG] table name starts with checksum breaks myloader #382
        - Fix issue #182 for tables with quotes #258
        - Dump procedures and functions using case sensitive database name #69

    Bugs Closed:

      - [BUG] Error occurs when using --innodb-optimize-keys and there is a column having a foreign key and part of a generated column #395
      - Tables without indexes can cause segfault #386

    Fixes:

      - [ Fix ] on table name segfault and add free functions #401
      - Adding the set session execution #398
      - [ Fix ] When no index, alter_table_statement must be NULL #397
      - Avoid logging SQL data on error #392
      - [ fix ] The count was incorrect #390
      - Fix on Debian version #387
      - Fix package URLs #384
      - Change quotes to backticks in SQL statements #350
        - bug for overwrite-tables option #272
        - mydumper: fix an 'SQL injection' issue when table name contains a ' or #168

    Refactoring:

      - Reading file list only one time #394
      - Export with table folders #212

    Questions Addressed:

      - [QUESTION] Increase CPU usage Tuning restore #380
      - [Question] how to use mydumper to backup 2 tables with different where conditions? #377
      - column charset is ignored once it's different from table charset #156
      - mydumper/myloader feature requests #84
      - load Generated Columns error #175

    Download MyDumper 0.11.1 Today!
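As an illustration of the headline config-file feature, a defaults file for mydumper/myloader might look like the sketch below. Note that the section and key names shown here are assumptions based on the release notes, not verified syntax; consult the MyDumper documentation for the exact format:

```ini
; Hypothetical MyDumper defaults file (section names are assumptions)
[mydumper]
host = 127.0.0.1
user = backup
threads = 4

; Session-level variables applied before dumping/loading -- the
; "most requested feature" mentioned above; exact section names
; should be checked against the MyDumper docs for your version.
[mydumper_session_variables]
wait_timeout = 28800

[myloader_session_variables]
sql_mode = ''
```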
