
MySQL and HeatWave

Recent Posts

Community

MySQL Database Service Replication: easy deployment with Terraform

Recently, we discovered how to set up Disaster Recovery in OCI MySQL Database Service using inbound/outbound replication. In this article we will see how we can deploy an MDS replication architecture very easily using Terraform and OCI Resource Manager. The example modules available on my GitHub (https://github.com/lefred/oci-mds-multi-region) allow you to deploy architectures like these: we can deploy a replica for a single MySQL instance or for a MySQL HA instance.

Let's see how to deploy such a solution using an OCI Resource Manager Stack. To start, just click on the button below: this will open the OCI dashboard console and load the Terraform modules. You need to acknowledge the Oracle Terms of Use, and you can provide a different name if you want.

Then we need to fill in the required variables, only 5! For this example, I will create 2 MDS instances: the source will be in London and the replica in Amsterdam. I'm fine with the proposed shape selection, so I leave the default values. After clicking on Next, I can see the summary and I just click on Create.

Now, I need to wait while all the resources are provisioned in OCI... After a while, we will get something like this if everything succeeded. In the Outputs section, we can see the necessary information to connect to our system. As you can see, a compute instance with MySQL Shell is also deployed in each region. In this example, if we go to the Amsterdam region, we can see an MDS instance with an active replication channel from London.

As you can see, it was very easy to deploy such an architecture, with a connection between 2 regions in OCI and MySQL replication enabled, and it took only about 8 minutes. In the next article, I will show you how to deploy an architecture like this using Terraform manually from your laptop.

Enjoy MySQL and MySQL Database Service on OCI!
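If you prefer to drive the same modules from your own machine instead of through Resource Manager (the topic of the next article), the workflow looks roughly like this. This is only a minimal sketch: the -var names below (source_region, replica_region, admin_password) are illustrative placeholders, so check variables.tf in the repository for the real variable names before running it.

# Clone the example modules and run them with the Terraform CLI
git clone https://github.com/lefred/oci-mds-multi-region.git
cd oci-mds-multi-region
terraform init      # downloads the OCI provider and modules
terraform plan \
    -var "source_region=uk-london-1" \
    -var "replica_region=eu-amsterdam-1" \
    -var "admin_password=********"       # placeholder values
terraform apply     # provisions both MDS instances and the replication channel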


Community

MySQL Database Service inbound replication channel troubleshooting

Using a MySQL Database Service instance in OCI as an asynchronous replica is very useful. It allows testing the service with always up-to-date data from on-premise production. It can be used in the process of migrating to OCI with minimal downtime and, finally, it can also be used between MDS instances in different regions for DR, analytics, etc.

Here are some links to the documentation and other resources about the inbound replication channel and how to use it:

- https://docs.oracle.com/en-us/iaas/mysql-database/doc/replication.html
- https://blogs.oracle.com/mysql/migrate-from-on-premise-mysql-to-mysql-database-service
- https://blogs.oracle.com/mysql/setup-disaster-recovery-for-oci-mysql-database-service
- https://www.slideshare.net/lefred.descamps/mysql-database-service-webinar-upgrading-from-onpremise-mysql-to-mds

In this article, we consider that you already have everything ready to replicate from the source to an MDS instance. This means that the network part (VCNs, eventual VPN, routing, security lists) and the replication user and credentials have been configured. We will consider errors like those described in this schema. But before checking what kind of errors we can encounter, let's see how we can verify the status of the inbound replication channel.

Inbound Replication Channel Status

The first place to check whether the replication channel is running is the OCI MDS console. We can also use the oci SDK to get that info. For example, from the Cloud Shell, you can get the status using the channel's OCID:

$ oci mysql channel get --channel-id <channel_ocid> | grep lifecycle-state
      "lifecycle-state": "NEEDS_ATTENTION"

Let's see an example: we know we need to check, but we have no idea what is wrong. Please note that the status on the console is not updated in real time and it can take up to 10 minutes before a problem is reported. I encourage you to check the MySQL replication channel status directly from the replica instance itself (more on that later).

How do we know what the problem is?

The only way to see what the problem is, is to connect to the MDS replica instance and verify using our client. I use MySQL Shell. Usually replication problems are reported in 4 different locations:

- the output of SHOW REPLICA STATUS\G (the most commonly used, but not recommended)
- performance_schema.replication_connection_status
- performance_schema.replication_applier_status_by_coordinator and performance_schema.replication_applier_status_by_worker
- MySQL's error log (available via performance_schema.error_log in MDS)

This is an example:

SQL > select * from performance_schema.error_log
      where SUBSYSTEM = 'Repl' and PRIO = 'Error' order by 1 desc limit 1\G
*************************** 1. row ***************************
    LOGGED: 2021-09-23 08:37:22.602686
 THREAD_ID: 51
      PRIO: Error
ERROR_CODE: MY-010584
 SUBSYSTEM: Repl
      DATA: Slave SQL for channel 'replication_channel': Worker 1 failed executing transaction 'b545b0e8-139e-11ec-b9ee-c8e0eb374015:35' at master log binlog.000008, end_log_pos 32561; Could not execute Write_rows event on table rss.ttrss_feeds; Duplicate entry '2' for key 'ttrss_feeds.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log binlog.000008, end_log_pos 32561, Error_code: MY-001062

We will cover the following issues:

- Network issues
- Credentials issues
- No dataset definition
- Missing binlogs
- Other replication problems related to statements
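Before diving into each category, note that you can get a quick single-query overview of both the receiver (IO) and applier (SQL) sides by joining the Performance Schema tables listed above. This is just a small convenience query of my own, not part of MDS or of the mds helper schema used later in this article:

SQL > SELECT c.CHANNEL_NAME,
             c.SERVICE_STATE      AS io_state,
             c.LAST_ERROR_NUMBER  AS io_errno,
             w.SERVICE_STATE      AS sql_state,
             w.LAST_ERROR_NUMBER  AS sql_errno,
             w.LAST_ERROR_MESSAGE AS sql_error
        FROM performance_schema.replication_connection_status c
        JOIN performance_schema.replication_applier_status_by_worker w
       USING (CHANNEL_NAME)\G

When the channel is healthy, io_state and sql_state both report ON and the error columns are 0 or empty.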
Now let's focus on the first category of inbound replication issues.

Replication Channel does not start

When you have created the replication channel and it immediately complains, it's usually caused by one of these two issues:

- the replica cannot connect to the source due to a network issue
- the replica cannot connect to the source due to wrong credentials

Network Issues

A network issue means that MDS is not able to connect to the host and port you defined when creating the inbound replication channel. The only way to fix this is to check the security lists and eventual routing. If we check the 4 sources of information, we will see the following.

From SHOW REPLICA STATUS\G (the last time I use it!):

SQL > show replica status\G
*************************** 1. row ***************************
   Replica_IO_State: Connecting to source
        Source_Host: 172.27.240.3
        Source_User: repl_mds
        Source_Port: 3306
      Connect_Retry: 60
...
 Replica_IO_Running: Connecting
Replica_SQL_Running: Yes
...
      Last_IO_Errno: 2003
      Last_IO_Error: error connecting to master '[email protected]:3306' - retry-time: 60 retries: 2 message: Can't connect to MySQL server on '172.27.240.3:3306' (113)

And from the Performance_Schema table:

SQL > select * from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
                                      CHANNEL_NAME: replication_channel
                                        GROUP_NAME:
                                       SOURCE_UUID:
                                         THREAD_ID: NULL
                                     SERVICE_STATE: CONNECTING
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
                          RECEIVED_TRANSACTION_SET:
                                 LAST_ERROR_NUMBER: 2003
                                LAST_ERROR_MESSAGE: error connecting to master '[email protected]:3306' - retry-time: 60 retries: 6 message: Can't connect to MySQL server on '172.27.240.3:3306' (113)
                              LAST_ERROR_TIMESTAMP: 2021-09-23 12:13:11.041796
                           LAST_QUEUED_TRANSACTION:
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
       QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00

This table refers to the IO thread. There is no need to check the content of the performance_schema.replication_applier_status_% tables, which are in fact related to the SQL thread.

Credentials Issues

Once the network issue is resolved, we might also encounter authentication issues like a wrong user, a wrong password... or SSL issues. In that case you will see the following error:

SQL > select * from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
                                      CHANNEL_NAME: replication_channel
                                        GROUP_NAME:
                                       SOURCE_UUID:
                                         THREAD_ID: NULL
                                     SERVICE_STATE: CONNECTING
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
                          RECEIVED_TRANSACTION_SET:
                                 LAST_ERROR_NUMBER: 1045
                                LAST_ERROR_MESSAGE: error connecting to master '[email protected]:3306' - retry-time: 60 retries: 1 message: Access denied for user 'repl_mds'@'10.0.1.115' (using password: YES)
                              LAST_ERROR_TIMESTAMP: 2021-09-23 12:20:18.405572
                           LAST_QUEUED_TRANSACTION:
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
       QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00

You need to verify the user and password and eventually recreate them. Also, check that you are using the right SSL mode in the inbound replication channel's console: in the previous error, I had used Disabled but the replication user was created with REQUIRE SSL.

Before checking the next category of errors, I would like to introduce some functions and procedures I created to make my life easier. I created them in a dedicated schema called mds. Every time you see mds. prefixing a function or a stored procedure, it is just something I added. You can add those additions too if you find them useful: mds_functions.sql. If you prefer not to add any functions and procedures to MDS, you can also use the MySQL Shell replication plugin. This is an example:

SQL > call replication_status;
+---------------------+-----------+------------+------------+
| channel_name        | IO_thread | SQL_thread | lag_in_sec |
+---------------------+-----------+------------+------------+
| replication_channel | ON        | OFF        |      11796 |
+---------------------+-----------+------------+------------+
1 row in set (0.0984 sec)

Replication Stops

Now we will see why the replication channel stops and how to fix it. Once again, we can divide this category into two sub-categories:

- it stops immediately after the first start
- it stops unexpectedly while it was running

Inbound Replication stops after first start

When inbound replication stops after having started for the first time, the 2 common reasons are:

- the user didn't tell the MDS instance the state of the initial load
- the source no longer has the required data in its binary logs

No dataset definition

Usually, when you create a replica in MDS, you first need to perform a dump using MySQL Shell and load it into MDS. If you use Object Storage to dump the data, you can specify the initial data to load when you create the MDS instance. This is the recommended way. If that method is used, there won't be any problem regarding the state of the imported dataset (GTID purged). However, if you do that initial import manually using MySQL Shell or any other logical method, you will have to tell MDS which GTIDs define that dataset. If you don't, once inbound replication starts, it will start replicating from the first binlog still available on the source and will most probably try to apply transactions already present in the dataset, failing with a duplicate entry error or a missing key.
To illustrate this, I manually imported a dump from the MySQL source server and then added an inbound replication channel. Replication started and then failed with the following error message:

Slave SQL for channel 'replication_channel': Worker 1 failed executing transaction 'b545b0e8-139e-11ec-b9ee-c8e0eb374015:35' at master log binlog.000008, end_log_pos 32561; Could not execute Write_rows event on table rss.ttrss_feeds; Duplicate entry '2' for key 'ttrss_feeds.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log binlog.000008, end_log_pos 32561, Error_code: MY-001062

We can see that this is indeed a duplicate entry error: HA_ERR_FOUND_DUPP_KEY. If we use the mds.replication_status_extended procedure, we will see the following:

SQL > call mds.replication_status_extended \G
*************************** 1. row ***************************
            channel_name: replication_channel
               IO_thread: ON
              SQL_thread: OFF
 last_queued_transaction: b545b0e8-139e-11ec-b9ee-c8e0eb374015:105
last_applied_transaction: b545b0e8-139e-11ec-b9ee-c8e0eb374015:34
         rep delay (sec): 1999483.584932
          transport time: 209.787793
                 time RL: 0.000008
              apply time: 0.002185
              lag_in_sec: 172787

And we can also verify the values of gtid_executed and gtid_purged:

SQL > select @@gtid_executed, @@gtid_purged\G
*************************** 1. row ***************************
@@gtid_executed: 34337faa-1c79-11ec-a36f-0200170403db:1-88,
b545b0e8-139e-11ec-b9ee-c8e0eb374015:1-34
  @@gtid_purged:

Oops... we realize that we forgot to purge the GTIDs present in the dump. We can now retrieve that info from the dump itself: so we need to fix that. In MDS, you are allowed to change the value of GTID_PURGED using sys.set_gtid_purged():

SQL > call sys.set_gtid_purged("b545b0e8-139e-11ec-b9ee-c8e0eb374015:35-101");

Note that I started from :35. You can now resume the replication channel from the OCI console. Just as sys.set_gtid_purged() exists, it would be very nice to have a similar procedure to restart the Replica_SQL_Running thread. It's common to see this kind of contradiction, as the status on the console can take several minutes to be updated... this can lead to a huge replication lag in case of a small issue (imagine there are 2 small consecutive issues: an extra 20 minutes to restart replication!).

Missing Binlogs

Another common error is using a dump that is too old, or not keeping the binary logs long enough on the source. If the next transaction is no longer present in the source's binlogs, replication won't be possible and the following error will be returned:

last_error_message: Got fatal error 1236 from master when reading data from binary log: 'Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup. Consider increasing the master's binary log expiration period. The GTID set sent by the slave is '4b663fdc-1c7e-11ec-8634-020017076ca2:1-100, b545b0e8-139e-11ec-b9ee-c8e0eb374015:35-101', and the missing transactions are 'b545b0e8-139e-11ec-b9ee-c8e0eb374015:1-34:102-108''

Nothing can be done if you want to keep data integrity. The only solution is to create the dump again and ensure that the binary logs that will be required are not purged.

Other Replication Problems

The inbound replication channel can also break later, when everything has been working fine. Let's see how we can deal with that.
On the on-premise source I run this query:

SOURCE SQL > create table oups (id int) engine=MyISAM;

And now on my replica in MDS:

MDS REPLICA SQL > call replication_status;
+---------------------+-----------+------------+------------+
| channel_name        | IO_thread | SQL_thread | lag_in_sec |
+---------------------+-----------+------------+------------+
| replication_channel | ON        | OFF        |        142 |
+---------------------+-----------+------------+------------+
1 row in set (0.1291 sec)

Let's verify what the problem is:

MDS REPLICA SQL > call replication_errors()\G
Empty set (0.1003 sec)
*************************** 1. row ***************************
        channel_name: replication_channel
last_error_timestamp: 2021-09-23 16:09:59.079998
  last_error_message: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'b545b0e8-139e-11ec-b9ee-c8e0eb374015:121' at master log binlog.000014, end_log_pos 6375. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
1 row in set (0.1003 sec)
*************************** 1. row ***************************
        channel_name: replication_channel
last_error_timestamp: 2021-09-23 16:09:59.080003
  last_error_message: Worker 1 failed executing transaction 'b545b0e8-139e-11ec-b9ee-c8e0eb374015:121' at master log binlog.000014, end_log_pos 6375; Error 'Storage engine MyISAM is disabled (Table creation is disallowed).' on query. Default database: 'test'. Query: 'create table oups (id int) engine=MyISAM'
1 row in set (0.1003 sec)
Query OK, 0 rows affected (0.1003 sec)

We can get some more information about the GTIDs too: we need to skip that transaction on our replica and then maybe create the table manually, or recreate it as InnoDB on the source. To skip it, we need to deal with GTID_NEXT and generate an empty transaction. I've created a procedure to perform all the steps for me (a minimal sketch of the idea is shown at the end of this article):

MDS REPLICA SQL > call skip_replication_error();
+------------------------------------------+
| skipped GTID                             |
+------------------------------------------+
| b545b0e8-139e-11ec-b9ee-c8e0eb374015:121 |
+------------------------------------------+
1 row in set (0.1080 sec)

This is an example using the MySQL Shell plugin: and now, you can go to the console and resume the replication channel as soon as you can.

Let's see what can break MDS inbound replication that usually doesn't break MySQL asynchronous replication:

- FLUSH PRIVILEGES
- FLUSH STATUS
- creating something (table, view, function, procedure) in mysql or sys
- an unsupported STORAGE ENGINE
- unsupported or removed syntax (like creating a user from a GRANT statement), ... see the full list of limitations in the manual.

It is an old bad practice to use the FLUSH statement without specifying LOCAL, as many FLUSH statements are replicated (see the manual). It's time to change your habits and add the LOCAL keyword to all your FLUSH statements to be sure they don't get replicated (this is also recommended in any replication topology, not only with MDS). So FLUSH PRIVILEGES becomes FLUSH LOCAL PRIVILEGES!

Conclusion

MySQL Database Service inbound replication is already great between MDS servers (for DR, etc.), but the DBA must be very attentive when dealing with an external source (on-premise, other clouds, ...), as some bad practices won't be allowed (old engines, FLUSH without LOCAL, adding objects in system schemas like mysql or sys).
What I'm really looking forward to for the inbound replication channel is:

- being able to restart it more quickly from SQL
- being able to use filters (which would be very useful for migrations from RDS and Aurora to MDS)
- the possibility to set the replica READ_ONLY
- being supported in MDS HA

Thank you for using MySQL and MySQL Database Service.
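For reference, the skip_replication_error() call used above comes from my own helper procedures, which are not part of MDS. Conceptually it just injects an empty transaction for the failing GTID. A minimal manual sketch of the same idea is below; the GTID value is the one from the example error and must be replaced with your own, and depending on your user's privileges, setting gtid_next may require SESSION_VARIABLES_ADMIN, SYSTEM_VARIABLES_ADMIN or REPLICATION_APPLIER. Only do this when you are sure that skipping the transaction is safe for data consistency.

-- 1. Identify the failing transaction (also visible in the error message)
SELECT CHANNEL_NAME, APPLYING_TRANSACTION, LAST_ERROR_MESSAGE
  FROM performance_schema.replication_applier_status_by_worker
 WHERE LAST_ERROR_NUMBER <> 0;

-- 2. Commit an empty transaction with that GTID so the applier considers it done
SET gtid_next = 'b545b0e8-139e-11ec-b9ee-c8e0eb374015:121';  -- replace with the failing GTID
BEGIN;
COMMIT;                                                      -- empty transaction
SET gtid_next = AUTOMATIC;

-- 3. Resume the replication channel from the OCI console.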


Announcements

Where’s the MySQL team from October - December 2021

As a continuation of the previous announcement from July 1st, 2021, please find below the list of shows where you can find the MySQL Community and/or the MySQL team during October - December 2021:

October 2021:

- OpenSource Conference India (OSI Days) / Virtual, October 7-8, 2021. MySQL will have a virtual booth and 2 MySQL talks at OSI Days, one keynote and one technical session: Keynote: "Why MySQL remains the OpenSource database of choice", given by Sanjay Manwani, Senior Director of MySQL India; Session: "MySQL Database Service with HeatWave", given by Jothir Ganesan, MySQL India Software Development Manager, SRE. For the timing and further details, please check the organizers' website.
- PyCon HK / Virtual, October 8-9, 2021. Ivan Ma, MySQL Master Principal Solution Engineer, will run a session in Cantonese on "MySQL Shell is your python friend #2", where you can learn about using Python with MySQL Shell. During the session Ivan will cover in-depth details about MySQL Shell Dump & Load for data migration. Come and join us to share your experience with Python and MySQL Shell. Ryan Kuan, MySQL Master Principal Solution Engineer, will lead a session in English on "MySQL Operator for Kubernetes", where attendees can hear about the MySQL Operator, written in Python, which makes provisioning, scaling, and administering MySQL InnoDB Cluster and microservices much easier. Come and join us to learn more about the latest MySQL Operator in action. For the timing and further details, please check the organizers' website.
- Longhorn PHP / Combo event, October 14-16, 2021. David Stokes, MySQL Community Manager, will run a virtual workshop on "MySQL Indexes, Histograms, Locking Options, and Other Ways to Speed Up Your Queries" and a regular virtual session on "PHP & MySQL -- How do PDO, MySQLi, and X DevAPI do what they do". For timing and details, please watch the organizers' website.
- All Things Open / Combo event, October 17-19, 2021. David Stokes, MySQL Community Manager, will give 2 virtual talks: one on "Database Basics for New-ish Developers" (Oct 18 @ 4:00-4:45pm) and a second on "MySQL Indexes and Histograms -- Speeding up your queries" (Oct 19 @ 4:00-4:45pm).
- Big Mountain Data & Dev Conference / Combo, October 22-23, 2021. David Stokes, MySQL Community Manager, will be talking virtually on "Indexes vs. Histograms to Speed Up Database Searches". For the timing and further details, please check the organizers' website.
- Open Source Conference Fall / Virtual, October 22-23, 2021. Do not miss the talk "Introduction to MDS/HeatWave and MySQL update" by Machiko Ikoma, MySQL Principal Solution Engineer. For the timing and further details, please check the organizers' website.
- MOPCON / Virtual, October 23-24, 2021. Ivan Ma, MySQL Master Principal Solution Engineer, will be talking about the "K8S MySQL operator". For the timing and more details, please check the organizers' website.
- GroupBy Americas / Virtual, October 26-27, 2021. David Stokes, MySQL Community Manager, will be talking on "Indexes vs. Histograms to Speed Up Database Searches". Please watch the organizers' website for the timing update.
- Texas Cyber Summit / Virtual, October 29-30, 2021. David Stokes, MySQL Community Manager, will talk on "Database Analytics with Windowing Functions -- Discovering what is in your database". The talk is scheduled for October 29th, 10:00am to 11:00am CDT (1.0 hrs).

November 2021:

- Kansai Open Forum / Virtual, November 13, 2021. At this Open Forum, Yoshiaki Yamasaki, MySQL Principal Solution Engineer, will talk on "Introduction to MDS and HeatWave". For the timing and more details, please check the organizers' website.
- DOAG 2021 / Virtual, November 16-18, 2021. Carsten Thalheimer, MySQL Principal Solution Engineer, will be talking on "MySQL in Kubernetes-Environments". His talk is scheduled for November 17 @ 12:00pm.
- db tech showcase / Virtual, November 17-19, 2021. You can already find 2 scheduled sessions out of our 5 approved ones: "MySQL Replication in InnoDB Cluster era", given by Yoshiaki Yamasaki, MySQL Principal Solution Engineer, and "Getting Started with MySQL Analytics - MySQL HeatWave", by Machiko Ikoma, MySQL Principal Solution Engineer. More will come soon; please check the organizers' website for further updates.
- Zabbix Conference Tokyo / Virtual, November 18-19, 2021. We are still considering being part of this event with our local MySQL group. Our attendance will be confirmed soon.
- JCConf / Virtual, November 19, 2021. Do not miss the talk "MySQL Operator for K8S" by 杜修文 / Ivan Tu, MySQL Solution Engineering Manager. The talk is scheduled for Nov 19 @ 14:00-14:45.
- OSC Fukuoka / Virtual, November 20, 2021. Yoshiaki Yamasaki, MySQL Principal Solution Engineer, will run a MySQL session which is still TBC. Please watch the organizers' website for further updates.
- Zabbix Summit / Virtual, November 25, 2021. Vittorio Cioe, MySQL Solution Engineer & Sales Consulting, will be giving a talk on "Zabbix, MySQL, Oracle Cloud: how infrastructure becomes code!". Please watch the organizers' website for further updates.

December 2021:

- RIOS OpenSource Week / Virtual, Nov 30 - Dec 2, 2021. Our attendance/talk at this show is still in process, but we will do our best to be part of it. Please watch the organizers' website for further updates.
- Ohio LinuxFest 2021 / Virtual, December 3-4, 2021. Our attendance/talk at this show is still in process. Please watch the organizers' website for further updates.
- IT Tage / Virtual, December 6-9, 2021. Carsten Thalheimer, MySQL Principal Solution Engineer, submitted a talk on "Der MySQL Datenbank und HeatWave Cloud Service - Wo stehen wir Ende 2021?", but we have not yet heard back about its acceptance. Please watch the organizers' website for further updates.


MySQL

Dave's MySQL Quiz #3 -- JSON data conversion

This week's quiz is a little different. Many of you grew up with MySQL's sample world database but have not used the world_x database (see https://elephantdolphin.blogspot.com/2021/09/sample-databases-for-learning-mysql-and.html for information on both sample data sets) and may be shocked to find that there is now a fourth table in the database and that some of the information is now stored in a JSON data type column.

The quiz is to take the following code, which works with the world database, and convert it to work with the world_x database.

SELECT city.Name as 'City'
     , city.Population as 'city pop'
     , country.Name as 'country'
     , country.Population as 'country pop'
     , round(city.Population / country.Population * 100,2) as 'Percent'
  FROM city
  JOIN country on (city.CountryCode = country.Code)
 ORDER BY country.Name, city.Name
 LIMIT 10;

The output should look like this:

Hint #1 -- The data has moved in some cases.
Hint #2 -- You can use JSON_UNQUOTE(JSON_EXTRACT(...)) or the ->> operator (see https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract and https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-inline-path).

There is a solution below, and your solution may be different from what is presented.

Solution:

SELECT city.Name as 'City'
     , city.Info->>"$.Population" as 'city pop'
     , countryinfo.doc->>"$.Name" as 'country'
     , countryinfo.doc->>"$.demographics.Population" as 'country pop'
     , round(city.Info->>"$.Population" / countryinfo.doc->>"$.demographics.Population" * 100,2) as 'Percent'
  FROM city
  JOIN countryinfo on (city.CountryCode = countryinfo.doc->>"$._id")
 ORDER BY countryinfo.doc->>"$.Name", city.Name
 LIMIT 10;
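As a quick illustration of Hint #2, the two forms below are interchangeable; ->> is simply shorthand for JSON_UNQUOTE(JSON_EXTRACT(...)). The column and path names are the ones used in world_x:

SELECT JSON_UNQUOTE(JSON_EXTRACT(doc, '$.Name')) AS country_a
     , doc->>'$.Name'                            AS country_b
  FROM countryinfo
 LIMIT 3;

Both columns return the same unquoted string value.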


Community

Dave's MySQL Quiz #2

This week's MySQL quiz uses the Sakila database (see below for details on obtaining and installing this sample database), and it is a great one for those wanting to move from beginner-level SQL development to a higher level. There will be lots of tables to join, and depending on how you decide to solve this problem you may also end up using window functions and groupings.

The problem: find the customers with more than one unreturned rented movie that is past its return due date. You might want to look at this example of finding an overdue rental. It shows you a similar query on which you could base your solution, and it shows how the various tables can be joined together to get the desired result. You will need to display the customer's ID number, the number of overdue videos, and the names of the videos! Bonus points for the customer name!

The Sakila example database: see https://dev.mysql.com/doc/sakila/en/ for information on obtaining, installing, and using this example database. Once upon a time, before streaming services were popular, if you wanted to watch a movie in your home you first had to go to a store filled with BETAMAX and VHS format tapes and rent them. The Sakila database represents the operations of such a store. This is a bigger set of data than the previous two and provides twenty-three tables, giving novices an opportunity to join more tables together. And Sakila is the name of the MySQL dolphin mascot.

A solution: please note that your solution may be different from this one, which hopefully shows the amazing flexibility of Structured Query Language. First we need to get the customer_id from the customer table. Then it takes a bit of work to get the information on the rental. It is often easier to write queries by determining the needed output columns, then the 'qualifiers' (the stuff on the right of the WHERE clause), before determining what has to be joined to get between the two. The part of the query that finds the overdue entries requires the rental date plus the length of the rental period to be before the current date: r.rental_date + INTERVAL f.rental_duration DAY < CURRENT_DATE(). Finding those customers with more than one such rental needs the count of r.rental_id to be greater than 1. So besides the rental table, we will need the inventory table to tie the rental table to the film table.
SQL > select c.customer_id,
         sum(count(r.rental_id)) over (partition by c.customer_id) as 'total',
         r.rental_id,
         group_concat(f.title order by f.title) as 'titles'
      from rental r
        join inventory i on (r.inventory_id=i.inventory_id)
        join film f on i.film_id = f.film_id
        join customer c on r.customer_id = c.customer_id
      where r.return_date IS NULL
        AND r.rental_date + INTERVAL f.rental_duration DAY < CURRENT_DATE()
      group by c.customer_id
      having count(r.rental_id) > 1
      order by c.customer_id;
+-------------+-------+-----------+----------------------------------------+
| customer_id | total | rental_id | titles                                 |
+-------------+-------+-----------+----------------------------------------+
|          15 |     2 |     13798 | CANDIDATE PERDITION,SMOKING BARBARELLA |
|          42 |     2 |     13351 | RIVER OUTLAW,TORQUE BOUND              |
|          43 |     2 |     15644 | MOTIONS DETAILS,POLLOCK DELIVERANCE    |
|          53 |     2 |     11657 | LAWLESS VISION,PEACH INNOCENT          |
|          60 |     2 |     12489 | BOOGIE AMELIE,CHAMBER ITALIAN          |
|          75 |     3 |     13534 | LUST LOCK,SLEEPY JAPANESE,TROUBLE DATE |
|         107 |     2 |     13079 | BLADE POLISH,CLUB GRAFFITI             |
|         155 |     2 |     11496 | CHASING FIGHT,HYDE DOCTOR              |
|         163 |     2 |     11754 | HOLES BRANNIGAN,SONS INTERVIEW         |
|         175 |     2 |     13161 | DEER VIRGINIAN,PIRATES ROXANNE         |
|         208 |     2 |     13719 | CURTAIN VIDEOTAPE,SEATTLE EXPECATIONS  |
|         216 |     2 |     11676 | SWEDEN SHINING,WOMEN DORADO            |
|         228 |     2 |     12672 | CYCLONE FAMILY,GRAPES FURY             |
|         267 |     2 |     12066 | LUST LOCK,PHILADELPHIA WIFE            |
|         269 |     2 |     12610 | PRINCESS GIANT,THEORY MERMAID          |
|         284 |     2 |     12064 | BERETS AGENT,FRIDA SLIPPER             |
|         354 |     2 |     11782 | TITANIC BOONDOCK,TROJAN TOMORROW       |
|         361 |     2 |     13298 | HALF OUTFIELD,INSECTS STONE            |
|         448 |     2 |     13577 | FAMILY SWEET,STATE WASTELAND           |
|         457 |     2 |     12645 | CLEOPATRA DEVIL,GLEAMING JAWBREAKER    |
|         516 |     2 |     12130 | FALCON VOLUME,MINORITY KISS            |
|         560 |     2 |     12116 | MOVIE SHAKESPEARE,PIANIST OUTFIELD     |
|         576 |     2 |     11942 | TITANIC BOONDOCK,VANISHED GARDEN       |
+-------------+-------+-----------+----------------------------------------+

Bonus: add the customer name

select c.customer_id,
       concat(c.first_name, ' ', c.last_name) AS 'Customer Name',
       sum(count(r.rental_id)) over (partition by c.customer_id) as 'tots',
       r.rental_id,
       group_concat(f.title)
  from rental r
  join inventory i on (r.inventory_id=i.inventory_id)
  join film f on i.film_id = f.film_id
  join customer c on r.customer_id = c.customer_id
 where r.return_date IS NULL
   AND r.rental_date + INTERVAL f.rental_duration DAY < CURRENT_DATE()
 group by c.customer_id
 having count(r.rental_id) > 1
 order by c.customer_id;
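If the full query above feels like a lot at once, a useful intermediate step is to first list the overdue, unreturned rentals on their own and only then add the grouping and the customer join. A sketch of that stepping-stone query, using the standard Sakila tables, could look like this:

SELECT r.rental_id,
       r.customer_id,
       f.title,
       r.rental_date + INTERVAL f.rental_duration DAY AS due_date
  FROM rental r
  JOIN inventory i ON r.inventory_id = i.inventory_id
  JOIN film f ON i.film_id = f.film_id
 WHERE r.return_date IS NULL
   AND r.rental_date + INTERVAL f.rental_duration DAY < CURRENT_DATE();

Once this returns the expected rows, wrapping it with GROUP BY customer_id and HAVING COUNT(*) > 1 gets you most of the way to the solution.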


MySQL HeatWave: Scale Out Management

Introduction

MySQL is the world's most popular open source database because of its reliability, high performance, and ease of use. MySQL has been designed and optimized for transaction processing, and enterprises around the world rely on it. With the introduction of HeatWave in MySQL Database Service, customers now have a single database which is efficient for both transaction processing and analytics. It eliminates the need for ETL to a specialized analytic database and provides support for real-time analytics. HeatWave is built on an innovative, in-memory query engine which is architected for scalability and performance and is optimized for the cloud. The MySQL HeatWave service is faster than other database services - Snowflake, Redshift, Aurora, Synapse, BigQuery, Clickhouse - at a fraction of the cost.

In order to process data with MySQL HeatWave, data first needs to be inserted into the MySQL database, where it is persisted, and then read from the database into the memory of the HeatWave cluster. Loading data into the MySQL database typically occurs once, but data may need to be loaded from the database into the HeatWave cluster often, for a variety of operations (Figure 1). This includes scheduled operations like a database upgrade or the stop and resume of a cluster, or unscheduled events like recovering from a node failure.

Figure 1. Loading data into MySQL database and HeatWave

Scale-out Data Management

The MySQL InnoDB storage engine stores data in a row-based format, while HeatWave stores data in memory in a hybrid columnar format. Loading data from MySQL to HeatWave therefore involves transforming the data into the HeatWave columnar format. To increase service uptime, HeatWave introduced a new storage layer built on OCI Object Storage. The first time data is loaded from the MySQL database into the memory of the HeatWave cluster, a copy of the in-memory representation is made to the HeatWave storage layer. If there is a need to reload data into the HeatWave cluster, which could happen for a variety of reasons like recovering from a node failure, the data is read from the HeatWave storage layer instead of the MySQL database. Any changes made to the data in the MySQL database are transparently propagated to the HeatWave storage layer.

Figure 2. HeatWave scale-out storage layer

Figure 2 shows the new HeatWave architecture with the scale-out storage layer. In the HeatWave storage layer, persisted data is organized in the same way as the in-memory data. Here are the advantages of this architecture:

1. Since the data in HeatWave storage is stored in the in-memory format, no transformation of the data is required when loading it from HeatWave storage into the memory of the HeatWave cluster. As a result, data can be loaded at near object store / network bandwidth.

2. Data in HeatWave storage is stored in the OCI object store in multiple volumes. When data is loaded into the HeatWave cluster, it can be read by all the HeatWave nodes in parallel, so if the data size is large, more HeatWave nodes can read the data. With this scale-out design, the time it takes to load any amount of data is constant and equal to the time it takes to load one HeatWave node, as shown in Figure 3.

Figure 3. Reloading data from the HeatWave store is done in parallel

3. Data can be read from HeatWave storage in fine-grained chunks - i.e. the entire table does not need to be read. For example, if one of the nodes in the HeatWave cluster needs to be loaded while recovering from a node failure, only the data for that node needs to be read from HeatWave storage (Figure 4). This significantly cuts down on network traffic and improves performance.

Figure 4. Data from HeatWave storage can be loaded selectively

350x Performance Improvement for Reloading Data

The time it takes to load data from HeatWave storage is constant, and data can be reloaded at near OCI object store bandwidth. In the larger OCI regions, the time to load data is about 4 minutes. The chart below compares the reload time from MySQL InnoDB with the reload time from HeatWave storage. For a HeatWave cluster with 64 nodes and 32 TB of data, the time drops from 24 hours to about 4 minutes - a roughly 350x improvement.

Figure 5. Improved performance and availability with HeatWave storage

This reduction in the time to reload data into the HeatWave cluster improves the availability of the HeatWave cluster.

Auto Change Propagation

Data updated in MySQL is propagated and persisted to the HeatWave data layer as change logs. During a data reload, HeatWave first restores the base data, then applies the data from the change logs. Over time, the persisted change log volume increases, which can result in an increased reload time, as all the change logs need to be applied to the base data. So the change logs are consolidated from time to time to alleviate the increased reload latency, as shown in Figure 6. However, determining when to consolidate is not an easy task; it depends on several factors such as transaction rate, system load, and failure probability.

Figure 6. Auto change propagation

To minimize consolidation time during a reload from the storage layer, auto change propagation uses a data-driven mechanism to determine the best change propagation interval and strategy. Auto change propagation analyzes the rate of changes, incoming DMLs, object storage resources, and previously seen change activity. As a result, the changes are propagated at the best time interval, which results in optimized consolidation time for critical system operations.

Conclusion

MySQL HeatWave is the only MySQL-based database which provides efficient support for transaction processing, analytics, and mixed workloads. As a result, customers are moving larger data sets to the MySQL Database Service. MySQL HeatWave scale-out storage provides an efficient mechanism to reload data into the HeatWave cluster and can reduce the load time by 350x. This reduction in load time translates into improved availability of the HeatWave cluster. MySQL Autopilot ensures that the data in HeatWave storage is kept in sync with the MySQL database.

Additional References:

- Watch the MySQL HeatWave Scale Out Management webinar
- Learn more about MySQL Database Service
- Learn more about MySQL HeatWave
- Try it free today!
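For readers who have not used HeatWave yet, the initial load from InnoDB into the HeatWave cluster described above is driven from plain SQL using the documented SECONDARY_ENGINE syntax. A minimal sketch (the table name orders is just an example):

-- Mark the table for HeatWave (secondary engine RAPID) and load it into the cluster
ALTER TABLE orders SECONDARY_ENGINE = RAPID;
ALTER TABLE orders SECONDARY_LOAD;

-- The table can later be unloaded from the cluster memory if needed
ALTER TABLE orders SECONDARY_UNLOAD;

Queries that HeatWave can accelerate are then offloaded automatically; the reload and change-propagation mechanics described in this post happen behind the scenes.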


Community

Import data from Amazon Aurora PostgreSQL to MySQL Database Service in OCI

In previous posts we already saw how to export data from PostgreSQL and AWS Redshift and import it into MySQL Database Service in OCI using the MySQL Shell importTable utility:

- How to import data from PostgreSQL to MySQL Database Service
- How to import data from Amazon Redshift to MySQL Database Service

Today, we will see how we can export some tables from AWS RDS Aurora PostgreSQL and import them into MDS. For this exercise, the data used is pagila, a port of the Sakila example database. We will use S3 to export the data.

AWS Aurora PostgreSQL

My RDS Aurora PostgreSQL instance is of course running, and the sample data is loaded.

Exporting to S3

RDS instances have the possibility to export directly to S3, a bit like Redshift, but it requires some manual steps. The first one is to install an extension, aws_s3, in PostgreSQL:

pagila=> CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
NOTICE: installing required extension "aws_commons"
CREATE EXTENSION

Then we need to create an S3 bucket in which we will store the data.

Permissions

We also need to grant our PostgreSQL instance permission to write to S3. To achieve that, we need to create a user and a policy; this is maybe the most complicated part. Now that the user is created, we can continue with the policy. The JSON overview should be similar to this: Finally, we need to create a role that we will assign later to the database instance, and we assign the previously created policy to it. We should end up with the following role: Let's assign all this to our Aurora PostgreSQL instance.

Export

We can now test the export. I don't use an s3_uri as documented in the AWS manual, as I would have to create one for each table (pgpagila is the bucket's name):

pagila=> SELECT * FROM aws_s3.query_export_to_s3('select * from actor', 'pgpagila', 'actor', 'eu-west-1', options :='format csv');
 rows_uploaded | files_uploaded | bytes_uploaded
---------------+----------------+----------------
           200 |              1 |           7999
(1 row)

pagila=> SELECT * from aws_s3.query_export_to_s3('select * from address', 'pgpagila', 'address', 'eu-west-1', options :='format csv');
 rows_uploaded | files_uploaded | bytes_uploaded
---------------+----------------+----------------
           603 |              1 |          49798
(1 row)

pagila=> SELECT * from aws_s3.query_export_to_s3('select * from category', 'pgpagila', 'category', 'eu-west-1', options :='format csv');
 rows_uploaded | files_uploaded | bytes_uploaded
---------------+----------------+----------------
            16 |              1 |            526
(1 row)

Let's have a look in the S3 bucket: perfect! It's possible to create a loop to export all the tables in a schema like this:

DO $$
DECLARE
   rec record;
BEGIN
   FOR rec IN SELECT tablename FROM pg_tables
              WHERE schemaname = 'public' ORDER BY tablename
   LOOP
      EXECUTE format($sql$
         SELECT * from aws_s3.query_export_to_s3('select * from %I',
                'pgpagila', '%I', 'eu-west-1', options :='format csv')
      $sql$, rec.tablename, rec.tablename);
   END LOOP;
END $$;

They are now all exported to S3.

Import to MDS

To import data into MySQL Database Service, the best option is to use a compute instance in a public subnet in OCI. On that compute instance we need to install MySQL Shell and s3fs-fuse. Then we need to mount the S3 bucket on the compute instance; the detailed procedure to install s3fs-fuse and the authentication keys is explained here.

[root@mysql-shell-lefred ~]# s3fs -d pgpagila /mnt/s3 \
    -o passwd_file=~/.passwd-s3fs -o endpoint=eu-west-1
[root@mysql-shell-lefred ~]# ls -lh /mnt/s3/
total 3.7M
-rw-r-----. 1 root root 7.9K Sep  9 20:50 actor
-rw-r-----. 1 root root  49K Sep  9 20:50 address
-rw-r-----. 1 root root  526 Sep  9 20:50 category
-rw-r-----. 1 root root  24K Sep  9 20:50 city
-rw-r-----. 1 root root 3.9K Sep  9 20:50 country
-rw-r-----. 1 root root  56K Sep  9 20:50 customer
-rw-r-----. 1 root root 337K Sep  9 20:50 film
-rw-r-----. 1 root root 162K Sep  9 20:50 film_actor
-rw-r-----. 1 root root  29K Sep  9 20:50 film_category
-rw-r-----. 1 root root 151K Sep  9 20:50 inventory
-rw-r-----. 1 root root  276 Sep  9 20:50 language
-rw-r-----. 1 root root 818K Sep  9 20:50 payment
-rw-r-----. 1 root root  58K Sep  9 20:50 payment_p2020_01
-rw-r-----. 1 root root 117K Sep  9 20:50 payment_p2020_02
-rw-r-----. 1 root root 292K Sep  9 20:50 payment_p2020_03
-rw-r-----. 1 root root 343K Sep  9 20:50 payment_p2020_04
-rw-r-----. 1 root root 9.4K Sep  9 20:50 payment_p2020_05
-rw-r-----. 1 root root    0 Sep  9 20:50 payment_p2020_06
-rw-r-----. 1 root root 1.3M Sep  9 20:50 rental
-rw-r-----. 1 root root  269 Sep  9 20:50 staff
-rw-r-----. 1 root root   58 Sep  9 20:50 store

We now have access to all our exported tables.

Data Structure

We still need to create the schema (database) in MySQL and create all the tables we want to import. The table definitions can be retrieved using pg_dump -st <table_name> schema. Most of the time the definition must be modified to be compatible with MySQL; it is the DBA's job to do this mapping. The PostgreSQL Type Mapping chapter of the manual can help. I also tried to use something like SQLines, online and on the command line, but the results were not conclusive. For example, the rental table can be rewritten like this:

-- ORIGINAL - POSTGRESQL
CREATE TABLE public.rental (
    rental_id integer DEFAULT nextval('public.rental_rental_id_seq'::regclass) NOT NULL,
    rental_date timestamp with time zone NOT NULL,
    inventory_id integer NOT NULL,
    customer_id integer NOT NULL,
    return_date timestamp with time zone,
    staff_id integer NOT NULL,
    last_update timestamp with time zone DEFAULT now() NOT NULL
);

-- REWRITTEN - MySQL
CREATE TABLE rental (
    rental_id integer auto_increment NOT NULL primary key,
    rental_date timestamp NOT NULL,
    inventory_id integer NOT NULL,
    customer_id integer NOT NULL,
    return_date timestamp,
    staff_id integer NOT NULL,
    last_update timestamp DEFAULT now() NOT NULL
);

Once you have rewritten the CREATE statements to be compatible with MySQL, it's time to create the schema and the tables. Let's try to import the data in this table (the MySQL classic protocol needs to be used, port 3306): there are several warnings related to the timestamp column. If we check the imported table, we will see that all records have 0000-00-00 00:00:00:

+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|        1 | PENELOPE   | GUINESS   | 0000-00-00 00:00:00 |
|        2 | NICK       | WAHLBERG  | 0000-00-00 00:00:00 |
|        3 | ED         | CHASE     | 0000-00-00 00:00:00 |
|        4 | JENNIFER   | DAVIS     | 0000-00-00 00:00:00 |
...

This is because the output date in PostgreSQL contains the timezone's offset:

[root@mysql-shell-lefred s3]# head -n 5 actor
1,PENELOPE,GUINESS,2020-02-15 09:34:33+00
2,NICK,WAHLBERG,2020-02-15 09:34:33+00
3,ED,CHASE,2020-02-15 09:34:33+00
4,JENNIFER,DAVIS,2020-02-15 09:34:33+00
5,JOHNNY,LOLLOBRIGIDA,2020-02-15 09:34:33+00

Since 8.0.19, MySQL can deal with a timezone offset, but the supported format is +00:00. So we need to prepare the data to match that format too:

[root@mysql-shell-lefred s3]# sed -i 's/$/:00/' actor
[root@mysql-shell-lefred s3]# head -n1 actor
1,PENELOPE,GUINESS,2020-02-15 09:34:33+00:00
2,NICK,WAHLBERG,2020-02-15 09:34:33+00:00

We can try again: all good!

Conclusion

The MySQL Shell importTable utility is the best tool to import tables into MySQL Database Service, but of course some preparation is required when you migrate from one RDBMS to another. Enjoy MySQL and MySQL Database Service!
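The import step itself (shown only as a screenshot in the original post) is done with MySQL Shell's util.importTable(). A minimal sketch of what such a call can look like when connected over the classic protocol on port 3306 is below; the schema name pagila and the option values are examples to adapt to your own target:

JS > util.importTable("/mnt/s3/actor", {
        schema: "pagila",        // target schema created earlier
        table: "actor",          // target table
        dialect: "csv-unix",     // comma-separated, LF-terminated files
        showProgress: true
     });

The same call can then be repeated, or scripted in a small loop, for each exported file in /mnt/s3.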


Community

Deploy Apache Superset with MySQL Database Service on OCI

We already saw how easy it is to deploy solutions on OCI using Terraform and Resource Manager stacks. I've published several resources, available on this page. Today we will see how easy it is to deploy Apache Superset on OCI using MySQL Database Service.

Apache Superset is an open source BI, reporting, and charting tool that competes with Tableau, Looker, etc. For a list of companies that have deployed Superset, see https://github.com/apache/superset/blob/master/RESOURCES/INTHEWILD.md. Superset is loaded with options that make it easy for users of all skill sets to explore and visualize their data, from simple line charts to highly detailed geospatial charts. For a gallery of available charts, see https://superset.apache.org/gallery. MySQL can be used as the backend to store the information the platform needs (users, settings, dashboards, ...); this represents 55 tables. MySQL can also be used as a source for data visualization.

Preparation

The easiest way to install Apache Superset on OCI is to click this button: You can also use the Terraform modules available on GitHub: https://github.com/lefred/oci-superset-mds. If you take the Red Pill (by clicking on the image above), you will be redirected to OCI's Resource Manager Stack dashboard: You need to accept the Oracle Terms of Use, and then the stack's configuration will be loaded. Check that you are in the desired compartment and click on Next: You are then redirected to the second screen, the variables configuration. Some variables are mandatory and self-explanatory: You also have the possibility to choose HA for the MySQL DB instance, to load the Superset sample data (the deployment then takes longer), and the shape selection. If you plan to use HeatWave on that instance, I recommend directly choosing a HeatWave compatible shape (the default): If you already have a VCN and/or an MDS instance you want to use, you can also reuse the existing OCI infrastructure you have previously deployed: You need the OCIDs of all the existing resources. When you have made all the selections you need, you can continue the process... Usually the defaults are good; you only need to add the MDS admin's password, and if this is your first Apache Superset deployment, I also recommend loading the sample data.

Deployment

The deployment will start; with the sample data load, it takes approximately 30 minutes... When ready, the status will change: At the end of the logs section we already have the output variables we need to reach our deployment, and we can retrieve that info in the Outputs section too.

Apache Superset

To reach the Apache Superset instance we just deployed, we paste the superset_public_ip value into a browser and use superset_admin_username and superset_admin_password to connect: Tadaaaam! Apache Superset is available and working on OCI with MySQL Database Service. If you want to connect to another MDS instance that you would use as a data source for some visualization graphs, you need to be able to reach it (usually on the same VCN subnet, or with routing between different VCNs) and you must use the following syntax, as the installed MySQL connector is mysql-connector-python:

mysql+mysqlconnector://<login>:<password>@<mds IP>:3306/<schema_name>

Conclusion

Using OCI's Resource Manager Stack (or Terraform), it is very easy to deploy Apache Superset on OCI using MySQL Database Service (MDS) as the backend. In a few minutes you have an open source data visualization solution that you can use with your MySQL Database Service instances.

Enjoy OCI, MySQL and MySQL Database Service!
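If you want to validate such a connection string outside of Superset first, a quick check from the compute instance can be done with SQLAlchemy and mysql-connector-python, which are the same libraries Superset relies on. This is only an illustrative sketch; the credentials, IP address, and schema below are placeholders to replace with your own values.

from sqlalchemy import create_engine, text

# Same URI format as entered in Superset's database connection dialog;
# replace user, password, host, and schema with your own values.
engine = create_engine("mysql+mysqlconnector://admin:MyPassw0rd!@10.0.1.10:3306/sakila")

with engine.connect() as conn:
    # A trivial query to confirm the MDS instance is reachable
    print(conn.execute(text("SELECT VERSION()")).scalar())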


Oracle MySQL Scores Big Race Victory at Silverstone

GT Cup 2021

For racing, as for the cloud, performance matters, and a few seconds can make the difference between winning and losing. Team Greystone GT and Oracle MySQL Database Service with HeatWave scored a maiden race win with Mason and Lancaster in the GT Cup.

Greystone GT became race winners for the first time as Rich Mason and Jon Lancaster scored an extraordinary GT Cup victory on a chaotic weekend at Silverstone. It was an inspired strategy call that put the #22 McLaren 570S GT4 into prime position in the GTH category - the most hotly contested class in GT Cup - in Saturday afternoon's Endurance Race. An on-track incident brought out the safety car close to half-distance, and it was still on track when the mandatory pit window - in which all cars must spend at least 70 seconds stationary and driver changes are allowed - opened at the 28-minute mark. While most of the field had passed the pit entry for the 11th time when the window opened, several cars running lower down - such as Rich's McLaren - were able to come in, swap drivers and rejoin the action while their rivals ran at slow speed behind the safety car. When green-flag racing resumed next time around and the majority of the field pitted, it was Leeds racer Jon - having now relieved Harrogate-based Rich - who led. The former European Le Mans Series champion clung on to that advantage too, winning by 0.032 seconds for a memorable first success for Greystone GT. It was a magnificent result for Rich, who was competing in only his third racing event, and for Silverstone-based Greystone GT with MySQL Database Service and HeatWave, whose race team was established less than a year ago and has taken on the challenge of GTH with three rookie competitors.

Additional reference: GreystoneGT page


Community

Point-in-Time Recovery in OCI MDS with Object Storage – part 2

In part 1 of this series about Point-in-Time Recovery in OCI MDS, we saw how to stream the binary logs to Object Storage. In this blog post, we will see how we can restore the data up to a certain point. The high-level process is simple:

1. restore the last backup taken just before the point we want to return to
2. find the last GTID that was executed in that backup (not 100% mandatory, but it can accelerate the process)
3. choose the objective: either (1) just skip one transaction (following transactions could depend on it), or (2) recover up to that transaction and skip all the following ones
4. replay the binary logs, taking into consideration the option chosen in point 3

The scenario

We have an application that constantly writes data (sysbench) and, at the same time, another table with our most valuable users (a very important table!!). This is the content of our table:

SQL > select * from vip;
+----+--------+---------------------+
| id | name   | inserted            |
+----+--------+---------------------+
|  1 | kenny  | 2021-08-26 11:46:36 |
|  2 | airton | 2021-08-26 11:46:36 |
|  3 | dave   | 2021-08-26 11:46:36 |
|  4 | miguel | 2021-08-26 11:46:36 |
|  5 | luis   | 2021-08-26 11:46:36 |
|  6 | mark   | 2021-08-26 13:33:12 |
+----+--------+---------------------+

Then one of the operators entered the following statements:

SQL > insert into vip (name) values ('mr mike');
SQL > insert into vip (name) values ('lenka');
SQL > insert into vip (name) values ('dimo');

The operator realized he made a mistake... and wanted to update that last record:

SQL > update vip set name = "dim0";
Query OK, 9 rows affected (0.0043 sec)

The operator didn't realize his mistake, and added another record:

SQL > insert into vip (name) values ('lefred');

The day after, another operator got some complaints... and when he checked the table:

SQL > select * from vip;
+----+--------+---------------------+
| id | name   | inserted            |
+----+--------+---------------------+
|  1 | dim0   | 2021-08-26 11:46:36 |
|  2 | dim0   | 2021-08-26 11:46:36 |
|  3 | dim0   | 2021-08-26 11:46:36 |
|  4 | dim0   | 2021-08-26 11:46:36 |
|  5 | dim0   | 2021-08-26 11:46:36 |
|  6 | dim0   | 2021-08-26 13:33:12 |
|  7 | dim0   | 2021-08-26 21:01:23 |
|  8 | dim0   | 2021-08-26 21:01:42 |
|  9 | dim0   | 2021-08-26 21:02:34 |
| 10 | lefred | 2021-08-27 14:48:13 |
+----+--------+---------------------+
10 rows in set (0.0009 sec)

Oops... who were the other VIPs? Of course, meanwhile many other writes happened to the database that we don't want to lose! So the first thing is to stop all the applications using this table (the vip one). Others can still run while we find the transaction we need to avoid...

Finding the transaction

This is of course the tricky part... we need to find the ID of the transaction that caused this error, and we need to gather as much info as possible to find it. In our example, we know that somebody changed all the values of the name column of table vip to 'dim0'. It also seems that 9 rows were modified, and this seems to have happened yesterday. Before parsing all the binary logs using mysqlbinlog, we can already try to identify a potential candidate containing the word 'dim0'. So on the Object Storage mount point (see part 1), we do a simple grep:

[root@mysql-shell-lefred my-mds]# grep dim0 *
Binary file binary-log.000140 matches

Great, we have a candidate; let's verify:

[root@mysql-shell-lefred my-mds]# mysqlbinlog -v \
    --base64-output=DECODE-ROWS \
    binary-log.000140 | less

We have identified the binary log file and the GTID (3fecfeaf-0651-11ec-a7e9-0200170484c8:537264).
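Instead of paging through the whole decoded file with less, you can often narrow the search down to the GTID line directly. This is just a convenience sketch using standard mysqlbinlog and grep options; the number of context lines (-B 20) is arbitrary and may need adjusting:

[root@mysql-shell-lefred my-mds]# mysqlbinlog -v --base64-output=DECODE-ROWS binary-log.000140 \
    | grep -B 20 "dim0" | grep "SET @@SESSION.GTID_NEXT"

The decoded row events appear shortly after the GTID_NEXT line of their transaction, so the last GTID printed before the matching rows is the transaction we are after.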
We have identified the binary log file and the GTID (3fecfeaf-0651-11ec-a7e9-0200170484c8:537264). We can move forward and restore the last backup we have before 2021-08-26 21:03:12 GMT.

Backup Restore

After we have identified the backup, we can restore it. When the restore is done, it's finally time to put all the applications in maintenance (unless you only want to recover the needed data, dump it and re-import it on production). On the current production, after having put the applications in maintenance (or simply stopped them), we can check the GTID set:

SQL > select @@gtid_executed;
+-----------------------------------------------+
| @@gtid_executed                               |
+-----------------------------------------------+
| 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-605971 |
+-----------------------------------------------+
1 row in set (0.0007 sec)

Point-in-Time Recovery

We need to connect to our newly restored instance from our compute instance using MySQL Shell. We can verify the value of the executed GTIDs and the content of the table we want to recover, as it was at restore time:

SQL > select @@gtid_executed;
+-----------------------------------------------+
| @@gtid_executed                               |
+-----------------------------------------------+
| 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-527817 |
+-----------------------------------------------+
1 row in set (0.0003 sec)

SQL > select * from mydb.vip;
+----+--------+---------------------+
| id | name   | inserted            |
+----+--------+---------------------+
|  1 | kenny  | 2021-08-26 11:46:36 |
|  2 | airton | 2021-08-26 11:46:36 |
|  3 | dave   | 2021-08-26 11:46:36 |
|  4 | miguel | 2021-08-26 11:46:36 |
|  5 | luis   | 2021-08-26 11:46:36 |
|  6 | mark   | 2021-08-26 13:33:12 |
+----+--------+---------------------+
6 rows in set (0.0044 sec)

We can finally see some of the VIPs… but of course we are missing some: that's the whole point of point-in-time recovery! Now the important thing is to answer the question of point 3 and choose the objective:

1. just skip one transaction (following transactions could depend on it)
2. recover up to that transaction and skip all the following ones

In case of choice number 1, we need to add the GTID of the transaction we want to skip to gtid_purged and then replay all the binary logs, starting from the one containing the last GTID present in gtid_executed on the restored instance (in our example: 3fecfeaf-0651-11ec-a7e9-0200170484c8:527817). In case of option 2, we also need to replay all the binary logs starting from the one containing that last GTID, but we stop just before the GTID we want to skip. For both options, we need to start feeding the MySQL instance with events from the same binary log: the one containing the last committed transaction. On the restored system, we need to get the first binary log file:

SQL > show binary logs;
+-------------------+-----------+-----------+
| Log_name          | File_size | Encrypted |
+-------------------+-----------+-----------+
| binary-log.000138 |   1937572 | No        |
| binary-log.000139 |       219 | No        |
| binary-log.000140 |       244 | No        |
| binary-log.000141 |       244 | No        |
+-------------------+-----------+-----------+

And we verify that the first one contains the last restored transaction as expected:

[root@mysql-shell-lefred my-mds]# mysqlbinlog -v \
    --base64-output=DECODE-ROWS binary-log.000138 \
    | grep '3fecfeaf-0651-11ec-a7e9-0200170484c8:527817'
SET @@SESSION.GTID_NEXT= '3fecfeaf-0651-11ec-a7e9-0200170484c8:527817'/*!*/;

Perfect, we know we need to start from binary-log.000138.
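If you are not sure which of the saved binary logs on the Object Storage mount contains that last restored transaction, a small loop can locate it for you. A sketch, using the same GTID as above:

for f in binary-log.000*; do
  mysqlbinlog "$f" 2>/dev/null \
    | grep -q "GTID_NEXT= '3fecfeaf-0651-11ec-a7e9-0200170484c8:527817'" \
    && echo "last restored transaction found in $f"
done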
Option 1: just skip one transaction

Now we will tell MySQL to ignore the transaction we want to skip (3fecfeaf-0651-11ec-a7e9-0200170484c8:537264). To do so, we first need to retrieve the value of gtid_purged and add the one we want to avoid:

SQL > select @@gtid_purged;
+-----------------------------------------------+
| @@gtid_purged                                 |
+-----------------------------------------------+
| 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-525206 |
+-----------------------------------------------+
1 row in set (0.0005 sec)

SQL > call sys.set_gtid_purged("3fecfeaf-0651-11ec-a7e9-0200170484c8:1-525206:537264")

And it's time to send all the binary logs to our instance (the newly restored one):

[root@mysql-shell-lefred my-mds]# mysqlbinlog --require-row-format \
    binary-log.000138 binary-log.000139 binary-log.0001[4-9]* \
    binary-log.000[2-3]* | mysql -u admin -h 10.0.1.118 -p

We started with binary-log.000138 and the last one was binary-log.000384. It's recommended to load them all at once and not one by one as described in the manual. If you don't use --require-row-format in MDS, you will get the following error:

ERROR 1227 (42000) at line 28: Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN or SESSION_VARIABLES_ADMIN privilege(s) for this operation

This is because the binary log contains the following statement, which is not allowed:

SET @@session.pseudo_thread_id=75/*!*/;

When the operation is finished, we are done! We can verify:

SQL > select * from vip;
+----+---------+---------------------+
| id | name    | inserted            |
+----+---------+---------------------+
|  1 | kenny   | 2021-08-26 11:46:36 |
|  2 | airton  | 2021-08-26 11:46:36 |
|  3 | dave    | 2021-08-26 11:46:36 |
|  4 | miguel  | 2021-08-26 11:46:36 |
|  5 | luis    | 2021-08-26 11:46:36 |
|  6 | mark    | 2021-08-26 13:33:12 |
|  7 | mr mike | 2021-08-26 21:01:23 |
|  8 | lenka   | 2021-08-26 21:01:42 |
|  9 | dimo    | 2021-08-26 21:02:34 |
| 10 | lefred  | 2021-08-27 14:48:13 |
+----+---------+---------------------+
10 rows in set (0.0006 sec)

We did an excellent job! We can also compare the GTIDs executed on both servers:

my-mds > SQL > select @@gtid_executed, @@gtid_purged\G
*************************** 1. row ***************************
@@gtid_executed: 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-605971
  @@gtid_purged: 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-605971

my-mds-restore > SQL > select @@gtid_executed, @@gtid_purged\G
*************************** 1. row ***************************
@@gtid_executed: 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-605971
  @@gtid_purged: 3fecfeaf-0651-11ec-a7e9-0200170484c8:1-527817:537264

Option 2: recover to that point

In case we wanted to restore the data exactly as it was before the wrong statement and stop at that exact point, there is no need to change the value of gtid_purged, and the mysqlbinlog command would have been:

[root@mysql-shell-lefred my-mds]# mysqlbinlog --require-row-format \
    binary-log.000138 binary-log.000139 binary-log.000140 \
    --include-gtids='3fecfeaf-0651-11ec-a7e9-0200170484c8:527817-537263' \
    binary-log.000[2-3]* | mysql -u admin -h 10.0.1.118 -p

Here we include all the binary logs we need (from 000138 to 000140) and the GTID set from the start up to the point we need.

Back in Production

Now we can point our applications to the new instance and restart them. We also need to change the binary log streaming by creating a new configuration and starting a new service. Finally, we can terminate and delete the previous production server.

Conclusion

You now have the step-by-step guide for Point-in-Time Recovery (PITR)… and as usual, enjoy MySQL and MySQL Database Service !


Community

Point-in-Time Recovery in OCI MDS with Object Storage – part 1

To set up point-in-time recovery for MDS using Object Storage, these are the prerequisites:

- a running MDS instance
- a backup plan (enabled by default)
- a compute instance
- an Object Storage Bucket

In this article, I won't focus on how to create a MDS instance or a compute instance, enable backups, or create an Object Storage Bucket; this is easy and there is already a lot of literature about it (you can also check several of my previous articles). This is part 1 of the Point-in-Time Recovery in OCI MDS series. It's only about streaming the binary logs to Object Storage. In part 2, we will see how to use them and perform PITR.

Saving the binary logs

In fact, most of the work is to set up the compute instance that will store the binary logs coming from MDS (one single compute instance could stream binlogs from multiple MDS sources). On the compute instance we need some extra packages:

- mysql-client (we need mysqlbinlog and mysql_config_editor)
- mysql-shell (because I like to use it)
- s3fs-fuse

The first two packages are available in the MySQL Repositories and the last one needs EPEL on rpm-based distributions. We need to create an ACCESS_KEY_ID and a SECRET_ACCESS_KEY to access our Object Storage Bucket. We copy these keys on one single line, separated with a colon ':', in a file, for example ~/.passwd-ocifs. And we mount it like this:

# chmod 600 ~/.passwd-ocifs
# mkdir /mnt/oci
# s3fs lefred-bucket /mnt/oci -o endpoint=us-ashburn-1 \
>    -o passwd_file=~/.passwd-ocifs \
>    -o url=https://ixxxxxxxxxx.compat.objectstorage.us-ashburn-1.oraclecloud.com/ \
>    -onomultipart -o use_path_request_style

The mount point to the Object Storage Bucket is ready and we will be able to stream the binary logs to /mnt/oci.

A dedicated MySQL User

We will now create a dedicated MySQL user that will fetch the binary logs from the MySQL instance. We connect to our MDS instance (I use MySQL Shell) and we do:

SQL> CREATE USER 'binlog_to_object_storage'@'10.0.0.%' IDENTIFIED BY 'C0mpl1c4t3d!Passw0rd' REQUIRE SSL;
SQL> GRANT REPLICATION SLAVE ON *.* TO 'binlog_to_object_storage'@'10.0.0.%';

I used an IP range for the host part of the credentials, but you could also have used the private IP of the compute instance.

Don't mess up with credentials

To connect to MDS with the new credentials, we could specify the user, host and password on the command line… but it's not always safe, and it can also behave strangely when we use special characters and want to script all of that (it's the case here, as I use an exclamation mark, !, in the password). The best solution MySQL provides us is mysql_config_editor, which works with all MySQL clients (mysqlbinlog too of course; we will see in the next chapter why we need it). So now we need to create a login path using mysql_config_editor. As my MDS instance is called my-mds, I will use the same name, but you can use whatever suits you best:

[root@mysql-shell ~]# mysql_config_editor set --login-path=my-mds \
    --host=10.0.1.128 --user=binlog_to_object_storage --password
Enter password:

10.0.1.128 is the IP of my MDS instance. We can test the connection using the old mysql client for example:

[root@mysql-shell ~]# mysql --login-path=my-mds
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 87
Server version: 8.0.26-u1-cloud MySQL Enterprise - Cloud

This is perfect, we can move forward.

mysqlbinlog

As you may know, mysqlbinlog has the possibility of reading the binary logs from a live server and simply storing them to disk, using the options --raw --read-from-remote-server. This is what we are going to use.
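Before wiring this into a script and a service (next sections), you can check manually that the new account and login path can indeed stream binary logs to the Object Storage mount. A quick sketch — the binary log name to use is simply the first file returned by SHOW BINARY LOGS on your own instance:

[root@mysql-shell ~]# mysql --login-path=my-mds -e "SHOW BINARY LOGS;"
[root@mysql-shell ~]# mkdir -p /mnt/oci/my-mds
[root@mysql-shell ~]# mysqlbinlog --login-path=my-mds --read-from-remote-server \
    --raw --result-file=/mnt/oci/my-mds/ <first_binary_log_from_the_list_above>

With --raw, a --result-file value ending in a slash is used as a directory prefix, so the streamed files keep their original names.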
I've created a wrapper slightly inspired by the one from my old friend Tamas. We need to save it on the compute instance in /root/bin/binlog_to_object_storage.sh (as the opc user is not allowed to exec fusermount, we need to use root). The file is here: binlog_to_object_storage.sh (for MDS HA, use this script that includes the hostname in the saved binary log's file name: binlog_to_object_storage.sh).

We also need a configuration file. As I wrote before, the same server can be used to stream from multiple different MDS instances. Let's create the configuration file in /root/conf and call it my-mds.conf:

MYSQL_LOGIN_PATH=my-mds
OBJECT_STORAGE_MOUNT=/mnt/oci
BINLOGS_DIR=my-mds

We only need these three variables, which define the credentials to use and where to store the data. I also created a systemd service to start the process; the same template unit can be used for multiple streaming processes. So we create a template unit file, for example /etc/systemd/system/binlog_to_object_storage@.service:

[Unit]
Description=Streaming MDS binary logs to Object Storage using %i
After=network.target

[Service]
Type=simple
User=root
Restart=on-failure
ExecStart=/root/bin/binlog_to_object_storage.sh /root/conf/%i.conf

[Install]
WantedBy=multi-user.target

We just need to reload the systemctl daemon and start our new service. As you can see, we provide the configuration name (my-mds) as the instance name of the service: binlog_to_object_storage@my-mds.service. And we can see in Object Storage that the binary logs are now stored there, even after being purged on the MySQL Database Service instance.

Conclusion

It's very easy to stream the binary logs of a MySQL Database Service instance to Object Storage in OCI. MySQL provides all the needed tools. In the second part of the series, we will see how we can use those binary logs to perform Point-in-Time Recovery. Enjoy MySQL and MySQL Database Service !


Community

Setup Disaster Recovery for OCI MySQL Database Service

When you create a MySQL Database Service instance in OCI, you have the choice between 3 types. If your RTO (Recovery Time Objective) in case of a failure is measured in minutes, you must choose a High Availability instance, which deploys a Group Replication cluster over 3 Availability Domains or 3 Fault Domains. See Business Continuity in the OCI Documentation. These are the two options:

Natural disasters happen: fires, floods, hurricanes, typhoons, earthquakes, lightning, explosions, volcanos, prolonged shortages of energy supplies, or even acts of governments could impede things. Having a DR copy of the data can be important. And of course, you will need to consider legal aspects as well. Case in point: GDPR compliance applies to where you place your main data center, but also to your DR data center. My example does DR without consideration of data compliance limitations. Given the large number of regions you can select from, a solution should exist that satisfies both DR and compliance. See https://www.oracle.com/cloud/data-regions/

The deployed DR instance can also be used for some read-only traffic. This can be useful for analytics, or to have a read-only report server in different regions too.

Architecture

Let's see how we can deploy such an architecture. I won't cover the creation of the MySQL HA instance as it's straightforward and has already been covered. But please pay attention: by default, the binary logs are kept for only one hour on a MDS instance:

SQL > select @@binlog_expire_logs_seconds;
+------------------------------+
| @@binlog_expire_logs_seconds |
+------------------------------+
|                         3600 |
+------------------------------+

In case you know that this will be too short, you can, before provisioning your HA instance, create a custom configuration where the value of binlog_expire_logs_seconds is higher. But this will also consume more disk space on your instances.

Back to the architecture: we have a different VCN in each region. To create the new standalone instance that will be used as DR (in Frankfurt), we need to:

- verify that we have a DRG (Dynamic Routing Gateway) to peer the different regions
- create a dedicated user for replication on the current HA instance
- dump the data to Object Storage using MySQL Shell
- create the new instance in the other region using the Object Storage Bucket as initial data
- create the replication channel

Peering the Regions

Peering different regions is not trivial; it's maybe the most complicated part of this architecture, as it requires some extra knowledge that is not only MySQL-focused. The best is to follow the manual about Peering VCNs in different regions through a DRG. I will try to summarize it here too. We start by creating a Dynamic Routing Gateway in both regions (Ashburn and Frankfurt). We create them; I called them DGR_gerrmany and DGR_usa. This is how they are represented:

When both DRGs are created, I first attach them to their VCN: And I do the same for the VCN in the USA. Now we can go back to the DRG page and create the Remote Peering Connection (RPC): Then we decide which side will initiate the connection. I decided to establish it from the USA, so I need the RPC_to_usa OCID: And on the other side we use it to establish the connection: After a little while, the RPC will become Peered: Finally, we need to create the entries in the Route Tables for both VCNs.
We need to add the rule to join the other network in the default and private-subnet routing tables, like this: And we need to use the other range in Germany to route to the USA (10.0.0.0/16). Don't forget to add those rules in the private subnet routing table too: This part is now finished, we can go back to MySQL…

Dedicated Replication User

On the MySQL HA instance, we create the user we will use for the replication channel:

SQL> CREATE USER 'repl_frankfurt'@'10.1.1.%' IDENTIFIED BY 'C0mpl1c4t3d!Passw0rd' REQUIRE SSL;
SQL> GRANT REPLICATION SLAVE ON *.* TO 'repl_frankfurt'@'10.1.1.%';

If we plan to have multiple DR sites, I recommend using a dedicated user per replica. Pay attention to the host part: it needs to match the Private Subnet range of the other region.

Dumping the Data

Now we need to dump the data directly into an Object Storage Bucket. We first create a bucket on OCI's Dashboard in the destination/target region, in this case Frankfurt. On the compute instance where we have installed MySQL Shell, we also need an oci config file. We can create it from the OCI Dashboard for our user (Identity -> User -> User Details). We need to download the keys if we choose to generate them, copy the content of the config into ~/.oci/config, and set the private key's location and filename. As I want to dump into an Object Storage Bucket located in Germany, I also have to change the region in ~/.oci/config to point to eu-frankfurt-1.

It's time to use MySQL Shell: the bigger your data, the bigger the compute instance used for MySQL Shell should be, as more CPU power also means more parallel threads! It's mandatory to use the option ociParManifest to create a dump that can be used as the initial data import. The logical dump will expire (it won't be usable anymore) as soon as the first GTID event following the dump ends up in a binary log that has been purged from the HA instance.
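The dump itself is done with MySQL Shell's dump utility, writing straight into the bucket created in Frankfurt. A minimal sketch of what the call can look like, run from the compute instance in Ashburn — the bucket name and thread count are assumptions to adapt to your environment, and the oci config created above (~/.oci/config) is picked up by default:

$ mysqlsh admin@10.0.1.212 --js -e '
    util.dumpInstance("initial-dump", {
      osBucketName: "mds-dr-frankfurt",  // bucket created in the target region (name is an assumption)
      osNamespace: "ixxxxxxxxxx",        // your tenancy Object Storage namespace
      ociParManifest: true,              // mandatory for the initial data import
      threads: 8
    })'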
"serverVersion": "8.0.26-u1-cloud", "binlogFile": "binary-log.000963", "binlogPosition": 6549228, "gtidExecuted": "96a2ea9c-9caf-425d-add1-8663411690d1:1-23308", "gtidExecutedInconsistent": false, "consistent": true, "compatibilityOptions": [], "begin": "2021-08-25 18:37:22" } This matches, it’s all good, we can now create the Replication Channel. Replication Channel On the new MySQL Database Instance, we use Channels underResources: We then create a new channel: After adding all the information, if everything is valid, we will see the active channel: And we can also verify this on the Replica: SQL > show replica status\G *************************** 1. row *************************** Replica_IO_State: Waiting for source to send event Source_Host: 10.0.1.212 Source_User: repl_frankfurt Source_Port: 3306 Connect_Retry: 60 Source_Log_File: binary-log.000973 Read_Source_Log_Pos: 826 Relay_Log_File: relay-log-replication_channel.000002 Relay_Log_Pos: 421 Relay_Source_Log_File: binary-log.000973 Replica_IO_Running: Yes Replica_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Source_Log_Pos: 826 Relay_Log_Space: 644 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Source_SSL_Allowed: Yes Source_SSL_CA_File: Source_SSL_CA_Path: Source_SSL_Cert: Source_SSL_Cipher: Source_SSL_Key: Seconds_Behind_Source: 0 Source_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Source_Server_Id: 1511248356 Source_UUID: 53025e27-0334-11ec-9eec-02001704d2b8 Source_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates Source_Retry_Count: 0 Source_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Source_SSL_Crl: Source_SSL_Crlpath: Retrieved_Gtid_Set: Executed_Gtid_Set: 96a2ea9c-9caf-425d-add1-8663411690d1:1-29224 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: replication_channel Source_TLS_Version: TLSv1.2,TLSv1.3 Source_public_key_path: Get_Source_public_key: 1 Network_Namespace: mysql 1 row in set (0.0955 sec) Conclusion As you can see many steps are very nicely integrated in MDS, like the replication channel creation, the initial data import at creation time, and more… Of course some networking knowledge (gateway, routing, firewall) is also required to join multiple regions. And as usual, MySQL Shell does the job too ! Enjoy MySQL and MySQL Database Service !


Community

How to import data from Microsoft SQL Server to MySQL Database Service

After having seen how we can import data from PostgreSQL and Amazon Redshift, this time we will see how we can export data from Microsoft SQL Server and import it into MySQL Database Service in OCI. This time we will use something extra (for fun, but also because it's practical): OCI Object Storage! The process will be to export the data directly to OCI Object Storage from the MS SQL Server, and then import it into MySQL Database Service using MySQL Shell's importTable() utility, reading directly from the Object Storage Bucket. For this exercise, we will use the BikeStores sample database from this tutorial.

Tables Definition

The first task is to get the table definitions of the tables we want to export to MDS. Let's first get the list of tables:

1> :setvar SQLCMDMAXVARTYPEWIDTH 30
2> :setvar SQLCMDMAXFIXEDTYPEWIDTH 30
3> go
1> select TABLE_SCHEMA, table_name from information_schema.tables
2> go
TABLE_SCHEMA                   table_name
------------------------------ ------------------------------
production                     categories
production                     brands
production                     products
sales                          customers
sales                          stores
sales                          staffs
sales                          orders
sales                          order_items
production                     stocks

(9 rows affected)

One big difference between SQL Server and MySQL is that in SQL Server there is a notion of databases and table schemas, while in MySQL databases and schemas are synonyms. As all table names are unique, we will simply ignore the table_schema names in MySQL and only use the database's name: BikeStores.

It's easy to get the table definition using SSMS (SQL Server Management Studio), but it's only available on Windows. We then have two remaining options for Linux users like me:

- use Azure Data Studio
- use the command line sqlcmd

Azure Data Studio

From Azure Data Studio, you can get the table definition using Script as Create, and then we get the selected table's creation statement. As in the previous post mentioned at the beginning of this article, some minor changes will be required for MySQL.

Using sqlcmd

For those not willing to use any GUI, it's also possible to get the table's definition using the command line. Unfortunately, in SQL Server, SHOW CREATE TABLE does not exist. We will use a stored procedure to get the info we are looking for: sp_GetDDL. So we download it:

# wget https://www.stormrage.com/SQLStuff/sp_GetDDL_Latest.txt

I added the following 2 lines at the top of the downloaded file before loading it to SQL Server:

SET QUOTED_IDENTIFIER ON
GO

To load it, this is the command:

# sqlcmd -S localhost -U SA -P 'Passw0rd!' -i sp_GetDDL_Latest.txt

We can now connect interactively to SQL Server like this:

# sqlcmd -S localhost -U SA -P 'Passw0rd!' -d BikeStores
1>

And we call the new procedure using the schema_name and table_name of the tables we want to have in MySQL Database Service:

1> :setvar SQLCMDMAXVARTYPEWIDTH 1024
2> exec sp_GetDDL 'production.categories'
3> go

The command will return something similar to this:

IF OBJECT_ID('[production].[categories]') IS NOT NULL
DROP TABLE [production].[categories]
GO
CREATE TABLE [production].[categories] (
  [category_id]   INT IDENTITY(1,1) NOT NULL,
  [category_name] VARCHAR(255) NOT NULL,
  CONSTRAINT [PK__categori__D54EE9B454313162] PRIMARY KEY CLUSTERED ([category_id] asc)
)

For MySQL we rewrite the create statement like this:

CREATE TABLE categories (
  category_id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
  category_name VARCHAR(255) NOT NULL
)

And we do the same for all the tables we want to import to MDS. You can use the Microsoft SQL Server Type Mapping to find the most suitable MySQL data type.
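The pattern is always the same: drop the square brackets and the schema prefix, turn IDENTITY(1,1) into AUTO_INCREMENT, move the clustered PRIMARY KEY constraint inline, and map the data types. As a purely illustrative sketch with a hypothetical table (not taken from the sample database), a conversion could look like this:

-- SQL Server definition (hypothetical example)
CREATE TABLE [sales].[invoices] (
  [invoice_id]   INT IDENTITY(1,1) NOT NULL,
  [invoice_date] DATETIME2(0) NOT NULL,
  [amount]       DECIMAL(10,2) NOT NULL,
  CONSTRAINT [PK_invoices] PRIMARY KEY CLUSTERED ([invoice_id] asc)
)

-- MySQL rewrite
CREATE TABLE invoices (
  invoice_id   INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
  invoice_date DATETIME NOT NULL,
  amount       DECIMAL(10,2) NOT NULL
);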
Mounting Object Storage

We will use s3fs-fuse to mount the OCI Object Storage Bucket, as explained in this article, on our SQL Server host and dump the tables into it. We use EPEL to install the required package:

$ sudo yum install -y s3fs-fuse

We create a bucket on OCI's Dashboard, and we need to create an ACCESS_KEY_ID and a SECRET_ACCESS_KEY. We copy these keys on one single line, separated with a colon ':', in a file, for example ~/.passwd-ocifs. And we mount it like this:

# chmod 600 ~/.passwd-ocifs
# mkdir /mnt/ocifs
# s3fs lefred-bucket /mnt/ocifs -o endpoint=us-ashburn-1 \
>    -o passwd_file=~/.passwd-ocifs \
>    -o url=https://ixxxxxxxxxx.compat.objectstorage.us-ashburn-1.oraclecloud.com/ \
>    -onomultipart -o use_path_request_style

Now we can write data directly to our Object Storage Bucket using /mnt/ocifs.

Exporting Data

Everything is ready to export the content of the tables into CSV files:

# sqlcmd -S localhost -U SA -P 'Passw0rd!' -d BikeStores \
>    -Q "set nocount on; select * from production.categories" \
>    -o /mnt/ocifs/categories.csv -h-1 -s"," -w 700 -W
# ls -lh /mnt/ocifs/
total 512
-rw-r--r--. 1 root root 147 Aug 24 21:28 categories.csv

We can directly see it in OCI's Dashboard too. We do the exact same process for all the tables we want to import to MySQL Database Service. You can of course also use the GUI to export to CSV and import those CSV files using MySQL Shell directly, without using Object Storage.

Importing Data

As usual, we will use MySQL Shell to import into MDS the data that has been generated from MS SQL Server. We start by creating the database and the tables, if this is not done yet. Don't forget that we need an oci config file on the compute instance. You can create it from the OCI Dashboard for your user (Identity -> User -> User Details). You need to download the keys if you generated them, then copy the content of the config into ~/.oci/config and set the private key's location and filename. After that, you are ready to import each table using MySQL Shell. We can see that the data is now present in MySQL. You repeat the same operation for each table you want to load into MySQL Database Service. In case you have not used sqlcmd and Object Storage, but preferred to use the GUI to generate the CSV files, you can import them like this:

Conclusion

Once again, the best solution to load data into MySQL Database Service is MySQL Shell. Enjoy MySQL and MySQL Database Service!


Community

How to import data from Amazon Redshift to MySQL Database Service

We saw in this previous post how to import data from PostgreSQL to MySQL Database Service. Using almost the same technique, we will now export data from Amazon Redshift and import it into an MDS instance. With Redshift we have two options to export the data to CSV files that can be imported into MDS:

- using Redshift's UNLOAD
- using PostgreSQL's output to local files

For this article we use the sample database that can be loaded into Redshift at the instance's creation: tickit.

Redshift Unload

The first method is the one recommended by Redshift. However, this method also requires an S3 bucket, as UNLOAD only works with S3. I've created an S3 bucket (redshiftdump) with an access point called mys3. I also need to create an IAM Role (redshift_to_s3) that allows Read & Write access to my S3 bucket, and finally I assigned that role to the Redshift Cluster. So now we can UNLOAD all the data using the Query Editor, and we repeat the query for all the tables we want to export.

Data Structure

Redshift is a fork of PostgreSQL, therefore we will need to modify the table definitions a little bit to import the data into MySQL. The definition of the tables in the tickit schema is available directly in the documentation:

create table category(
  catid smallint not null distkey sortkey,
  catgroup varchar(10),
  catname varchar(10),
  catdesc varchar(50));

This create statement must be modified; we will only change the primary key, so the statement becomes:

create table category(
  catid smallint not null auto_increment primary key,
  catgroup varchar(10),
  catname varchar(10),
  catdesc varchar(50));

And we do the same for all the tables we want to load into MySQL. It's recommended to change integer primary keys to integer unsigned, as we won't use negative values. So for the table users, this field:

userid integer not null distkey sortkey,

becomes

userid integer unsigned not null primary key,

Using S3 with our MySQL Shell compute instance

The data is on S3, but we need to import it into MDS. Usually, to import data into a MySQL instance, we use MySQL Shell with the importTable() utility. The recommended way to use this utility with MDS is to use a dedicated compute instance. For large datasets, a large compute instance is recommended to benefit from multiple parallel threads. It's also possible to mount S3 as a filesystem on Linux using s3fs-fuse, available in EPEL:

$ sudo yum install -y s3fs-fuse

We first need to create an access key. When you have your key, you need the Access Key ID and the Secret Access Key, which you add to a file (concatenated with a colon ':'); I called the file ~/.passwd-s3fs. Protect your file:

chmod 600 ~/.passwd-s3fs

In S3, we need to create an access point for our bucket. We now use it to mount our bucket and check that the files we have exported from Redshift are available.

Importing the Data

It's now time to import the data using MySQL Shell; we start with the table category. You can notice that fields are separated by '|' when using UNLOAD. And we can do exactly the same for all the tables. Now, if we don't have S3 or we don't want to mount it on OCI, we have the second option.

Export to local CSV

It's also possible to connect to Redshift as a traditional PostgreSQL database, as explained in this article. As soon as you have everything ready in AWS (an Internet Gateway, a Security Group with an Inbound Rule allowing Redshift traffic, and a Routing Table routing 0.0.0.0/0 to the Internet Gateway), you can connect to Amazon Redshift even from our MySQL Shell compute instance. We install the PostgreSQL client package:

sudo yum install -y postgresql12

And we can connect to our Redshift cluster and export the data to CSV files like this:
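The original post shows this step as a screenshot; as a sketch, the command can look like the one below, reusing the same \copy approach as in the PostgreSQL article. The cluster endpoint, user and database name are assumptions to replace with your own values (5439 is Redshift's default port):

psql -h redshift-cluster-1.xxxxxxxxxxxx.us-east-1.redshift.amazonaws.com -p 5439 \
     -U awsuser -d dev \
     -c "\copy category to 'category.csv' delimiter ',' CSV HEADER;"

And we repeat the command for each table we want to export.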
And now you can also import the data into the MDS instance from MySQL Shell, in the same way, using the importTable() utility.

Conclusion

Using an OCI Compute Instance with MySQL Shell is the best option to load data from PostgreSQL, Amazon Redshift, and other RDBMS directly into MySQL Database Service. The import process is straightforward; the export process is a bit less trivial. Enjoy MySQL and MDS !


Community

How to import data from PostgreSQL to MySQL Database Service

MySQL Database Service (aka MDS) is very popular and many users want to benefit from the managed MySQL service and from MySQL HeatWave. We see many users migrating their data from MySQL on-premise or from another cloud vendor to MDS… and we also see people migrating from other data stores like PostgreSQL. In this article we will see how to migrate data from PostgreSQL to MDS using CSV files and the MySQL Shell import utility. When we want to copy data from one database to another, we need to handle the data structure and the actual data separately. For this example, we will use the PostgreSQL sample database called DVDrental.

Tables Definition

The first task consists in getting the table definitions out of PostgreSQL. The most popular option is to use pg_dump; let's have a look at it with the first table (actor):

-bash-4.2$ pg_dump -st actor dvdrental
--
-- PostgreSQL database dump
--

-- Dumped from database version 12.8
-- Dumped by pg_dump version 12.8

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
SET default_tablespace = '';
SET default_table_access_method = heap;

--
-- Name: actor; Type: TABLE; Schema: public; Owner: postgres
--

CREATE TABLE public.actor (
    actor_id integer DEFAULT nextval('public.actor_actor_id_seq'::regclass) NOT NULL,
    first_name character varying(45) NOT NULL,
    last_name character varying(45) NOT NULL,
    last_update timestamp without time zone DEFAULT now() NOT NULL
);

ALTER TABLE public.actor OWNER TO postgres;

--
-- Name: actor actor_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
--

ALTER TABLE ONLY public.actor
    ADD CONSTRAINT actor_pkey PRIMARY KEY (actor_id);

--
-- Name: idx_actor_last_name; Type: INDEX; Schema: public; Owner: postgres
--

CREATE INDEX idx_actor_last_name ON public.actor USING btree (last_name);

--
-- Name: actor last_updated; Type: TRIGGER; Schema: public; Owner: postgres
--

CREATE TRIGGER last_updated BEFORE UPDATE ON public.actor FOR EACH ROW EXECUTE FUNCTION public.last_updated();

--
-- PostgreSQL database dump complete
--

From that output, only the table definition itself is relevant for importing the data (the indexes can eventually be created later). If you are familiar with MySQL, you will directly notice that the SQL syntax returned won't work in MySQL. Let's have a look at the table's definition:

CREATE TABLE public.actor (
    actor_id integer DEFAULT nextval('public.actor_actor_id_seq'::regclass) NOT NULL,
    first_name character varying(45) NOT NULL,
    last_name character varying(45) NOT NULL,
    last_update timestamp without time zone DEFAULT now() NOT NULL
);

We can see that:

- there is a public. prefix
- the primary key is not defined here (but it's defined later as a constraint)
- a user-defined object is used as default: public.actor_actor_id_seq, which is a sequence
- character varying is actually VARCHAR
- the timestamp field uses some extra information not known in MySQL: without time zone

That statement must be rewritten to work in MySQL; this is a manual step:

CREATE TABLE actor (
    actor_id integer auto_increment NOT NULL primary key,
    first_name varchar(45) NOT NULL,
    last_name varchar(45) NOT NULL,
    last_update timestamp DEFAULT now() NOT NULL
);

You can also use the PostgreSQL Type Mapping from the manual. We can enter that new rewritten statement in our MySQL Database Service instance and add the secondary indexes:
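For the actor table, the only secondary index in the pg_dump output above is idx_actor_last_name, which in MySQL can be added with something like:

ALTER TABLE actor ADD INDEX idx_actor_last_name (last_name);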
We need to perform this exercise for all the tables present in the dvdrental schema:

dvdrental-# \dt
             List of relations
 Schema |     Name      | Type  |  Owner
--------+---------------+-------+----------
 public | actor         | table | postgres
 public | address       | table | postgres
 public | category      | table | postgres
 public | city          | table | postgres
 public | country       | table | postgres
 public | customer      | table | postgres
 public | film          | table | postgres
 public | film_actor    | table | postgres
 public | film_category | table | postgres
 public | inventory     | table | postgres
 public | language      | table | postgres
 public | payment       | table | postgres
 public | rental        | table | postgres
 public | staff         | table | postgres
 public | store         | table | postgres
(15 rows)

This is the most complex part when multiple USER-DEFINED data types are used, like in the table film:

rating public.mpaa_rating DEFAULT 'G'::public.mpaa_rating,

What is that mpaa_rating? You can retrieve its definition like this:

dvdrental=# select enum_range(null::mpaa_rating);
      enum_range
----------------------
 {G,PG,PG-13,R,NC-17}
(1 row)

We can see that this is an ENUM, and we can then redefine that column like this:

rating ENUM('G','PG','PG-13','R','NC-17') DEFAULT 'G',

Time for the Data

Now it's time to export the data from PostgreSQL to CSV:

dvdrental=# \copy actor to '/vagrant/output/actor.csv' delimiter ',' CSV HEADER;
COPY 200
dvdrental=# \copy address to '/vagrant/output/address.csv' delimiter ',' CSV HEADER;
COPY 603
dvdrental=# \copy category to '/vagrant/output/category.csv' delimiter ',' CSV HEADER;
COPY 16
dvdrental=# \copy customer to '/vagrant/output/customer.csv' delimiter ',' CSV HEADER;
COPY 599
dvdrental=# \copy film to '/vagrant/output/film.csv' delimiter ',' CSV HEADER;
COPY 1000

When all the tables we want to import have been exported, we can use MySQL Shell with the importTable() utility. importTable() can work in parallel! (not here, as the data is too small). And we repeat that operation for all the tables we want to import. importTable() can also import multiple files into one single table! We can verify that the data has been imported.

Conclusion

Importing CSV files into an MDS instance is very easy and efficient using MySQL Shell. It can be used to import data from PostgreSQL but also from other RDBMS. Of course, depending on the structure of the data, some initial work might be required, especially when some data types are different.


MySQL Database Service

MySQL Autopilot - Machine Learning Automation for MySQL HeatWave

Introduction MySQL is the world most popular open source database because of its reliability, high-performance, and ease of use. MySQL has been designed and optimized for transaction processing and enterprises around the world rely on it. With the introduction of HeatWave in MySQL Database Service, customers now have a single database which is efficient for both transaction processing and analytics. It eliminates the need for ETL to a specialized analytic database and provides support for real-time analytics. HeatWave is built on an innovative, in-memory query engine which is architected for scalability and performance and is optimized for the cloud. MySQL HeatWave service is faster than other database services – Snowflake, Redshift, Aurora, Synapse, Big Query - at a fraction of the cost. MySQL Autopilot  (Watch the webinar) We recently introduced MySQL Autopilot for MySQL HeatWave. MySQL Autopilot automates many important and challenging aspects of achieving high query performance at scale - including provisioning, data loading, query execution and failure handling. It uses advanced techniques to sample data, collect statistics on data and queries, and build machine learning models to model memory usage, network load and execution time. These machine learning models are then used by MySQL Autopilot to execute its core capabilities. MySQL Autopilot makes the HeatWave query optimizer increasingly intelligent as more queries are executed, resulting in continually improving system performance over time. Autopilot focuses on four aspects of the service lifecycle: system setup, data load, query execution and failure handling. Figure 1. MySQL Autopilot automates different aspects of the service to improve performance, scalability and usability of the system System Setup 1.  Auto provisioning predicts the number of HeatWave nodes required for running a workload by adaptive sampling of table data on which analytics is required. This means that customers no longer need to manually estimate the optimal size of their cluster. Data Load 2.  Auto parallel loading optimizes the load time and memory usage by predicting the optimal degree of parallelism for each table being loaded into HeatWave. 3.  Auto encoding determines the optimal representation of columns being loaded into HeatWave taking queries into consideration. This optimal representation provides the best query performance and minimizes the size of the cluster to minimize the cost. 4.  Auto data placement predicts the column on which tables should be partitioned in-memory to achieve the best performance for queries. It also predicts the expected gain in query performance with the new column recommendation. Query Execution 5.  Auto query plan improvement learns various statistics from the execution of queries and improves the execution plan of future queries. This improves the performance of the system as more queries are run. 6.  Auto query time estimation estimates the execution time of a query prior to executing the query, allowing quick tryout and test on different queries 7.  Auto change propagation intelligently determines the optimal time when changes in MySQL Database should be propagated to the HeatWave storage layer. This ensures that changes are being propagated at the right optimal cadence. 8.  Auto scheduling determines which queries in the queue are short running and prioritizes them over long running queries in an intelligent way to reduce overall wait time. Failure Handling 9.  
Auto error recovery:  Provisions new nodes and reloads necessary data from the HeatWave storage layer if one or more HeatWave nodes is unresponsive due to software or hardware failure   Auto Provisioning Auto Provisioning provides recommendation on how many HeatWave nodes are needed to run a workload. When the service is started, database tables on which analytics queries are run need to be loaded to HeatWave cluster memory. The size of the cluster needed depends on tables and columns required to load, and the compression achieved in memory for this data. Figure 2 compares the traditional (i.e., manual) approach to estimating the cluster size with Auto Provisioning. In traditional provisioning, users need to guess a cluster size. Underestimation results in data load or query execution failure due to space limitations. Overestimation results in additional costs for unneeded resources. As a result, users iterate until they determine the right cluster size, and this size estimate becomes inaccurate when tables are updated.    Figure 2. Comparison of a manual provisioning vs Auto provisioning The right side of figure 2 shows how auto provisioning, a ML-based cluster size estimation advisor, solves this problem. By leveraging well trained and accurate ML models, the user consults auto provisioning advisor to obtain the right cluster size for their dataset. As a result, users do not need to guess the cluster size. Later, if the customer data grows or additional tables are added, the users can again take advantage of the auto provisioning advisor. Below is an example of the accuracy of memory prediction observed on some data sets. Datasets TPCH 1024G TPCDS 1024G Cust A  Cust B Accuracy in memory prediction 98.4% 96.9% 98.3% 96.9%   Auto Parallel Load Loading data into HeatWave involves several manual steps. The time required to perform these steps depends on the number of schemas, tables, and columns, and statistics. Auto parallel load automates these steps by predicting the degree of parallelism per table via machine-learning models to achieve optimal load speed and memory usage. Auto Encoding HeatWave supports two string column encoding types: variable-length and dictionary. The type of encoding affects the query performance as well as the supported query operations. It also affects the amount of memory required for HeatWave nodes. By default, HeatWave applies variable-length encoding to string columns when data is loaded, which may not be the optimal encoding choice for query performance and cluster memory usage for certain workloads. Figure 3. Comparison of a default encoding vs auto encoding Auto encoding provides recommendations for string columns that reduce memory usage and help improve query performance. Figure 3 shows the difference between default encoding and auto encoding. In the default case, the variable-length encoding ensures best query offload capability. However, this can impact ideal query performance due to data movement between HeatWave nodes and MySQL nodes. Auto encoding uses machine learning to analyze column data, HeatWave query history, and available MySQL node memory to identify which string columns can be coded in dictionary encoding. When the suggestion is applied, the overall query performance is improved due to reduced data movement in system, and HeatWave memory usage is reduced due to efficient (i.e., smaller) dictionary codes and the corresponding dictionaries that maintain the mapping between the strings and the codes reside in the memory of the MDS node. 
Auto Data Placement Data placement keys are used to partition table data when loading tables into HeatWave. Partitioning table data by JOIN and GROUP BY key columns can improve query performance by avoiding costs associated with redistributing data among HeatWave nodes at query execution time. Determining the best data placement key is a tedious task requiring understanding query access patterns and system behavior. Moreover, picking wrong partitioning keys can lead to sub-optimal performance due to increased data distribution costs during query execution time. Figure 4. Comparison of manual data placement vs auto data placement Figure 4 depicts a comparison between default query execution and execution with auto data placement. Based on machine learning models, auto data placement recommends appropriate data placement keys by analyzing table statistics and HeatWave query history and provides an estimation of query performance improvement. Once the suggestion is applied, query performance is improved by minimizing the data movement between nodes during execution. The system predicts the optimal key on which the placement should be done and also predicts the expected improvement in the execution time. Below is the an example of improvement observed with auto data placement. Dataset Time with primary key placement Predicted improvement Actual improvement TPCH 1024 332 sec 26% 37% TPCH 4096 373 sec 20% 25%   Auto Query Plan Improvement Auto query plan improvement enhances query performance by improving query plan statistics based on previous query executions. By maintaining more accurate query statistics, HeatWave creates better query plan and makes better decisions on the underlying physical operators; consequently improves the overall query performance. Figure 5. Query 2 benefits from statistics of a previous similar query (query 1) with auto query plan improvement Figure 5 shows how auto query plan improvement works without user intervention. After a query (Q1) executes on HeatWave, auto query plan improvement collects and stores the cardinalities of all operations in the query execution plan (e.g., scan, join, group by). When a similar (or identical) query arrives (Q2), the system checks whether it can take advantage of the previously collected statistics information for Q2. If the system determines a similarity between the two query plans, a better query plan is generated based on statistics information from Q1. In doing so, it improves query performance and cluster memory usage significantly. Auto Query Time Estimation Users are often interested in accurate query time estimates before running the query.  Such functionality allows users to estimate their application performance better and to understand the resource needed. Auto query time estimation not only provides  user-visible estimations for query run times, but it also uses the same building blocks internally to improve query performance by optimizing query (sub-)plans. Instead of using static, analytical models, auto query time estimation integrates a data-driven query time estimation module, which improves as queries run. To do so, HeatWave leverages load- and run-time statistics and dynamically tunes query cost models during execution. As a result, auto query time estimation improves with time as more queries are executed on the system. Auto Change Propagation Data updated in MySQL is propagated and persisted to HeatWave data layer as change logs. 
During data reload, HeatWave first restores data from the base data, then applies the data from the change logs. Over time, the persisted change log volume increases, which can result in an increased reload time as all the change logs need to be applied to the base data. So, the change logs are consolidated from time-to-time to alleviate increased reload latency as shown in Figure 6. However, determining when to consolidate is not an easy task, which depends on several factors such as transaction rate, system load, failure probability. Figure 6. Auto change propagation To minimize consolidation time during reloading from the storage layer, auto change propagation uses data driven mechanism to determine the best change propagation interval and choice. Auto change propagation analyzes rate of changes, incoming DMLs, object storage resources, and previously seen change activity.  As a result, the changes are propagated at the best time interval, which results in optimized consolidation time for critical system operations. Auto Scheduling Traditional database systems process queries based on their arrival time, which can result in long-running queries starving short-running queries, as shown in Figure 7. Figure 7. Traditional database system vs HeatWave auto scheduling On the left, is a sub-optimal case where three queries (Q1, Q2, Q3) from three user sessions arrive one after the other and are scheduled in the FIFO order. After the execution completes, one can identify that waiting time for Q3 could be reduced significantly with minimal impact on Q2 latency. On the right, it shows how auto scheduling improves user experience for short running queries in a multi-session application. Auto scheduling identifies and prioritizes short-running queries by automatically classifying queries into short or long queries using HeatWave data driven algorithms. Therefore, Q3 is prioritized before Q2 as Q3 is identified as a short-running query. Auto Scheduling reduces elapsed time for short-running queries significantly when the multi-session applications consist of a mix of short and long running queries. It also ensures long-running queries are not penalized and are not postponed indefinitely. Auto Error Recovery HeatWave automatically provisions a new  HeatWave node(s) when a hardware or software failure is detected on a node. When the cluster is restored, auto error recovery automatically reloads the data only to the re-provisioned node(s), allowing a very fast recovery. Conclusion MySQL HeatWave is the only MySQL based database which provides machine learning based automation. MySQL Autopilot automates the task of optimally provisioning a cluster, loading data and query processing. It makes HeatWave increasingly intelligent as more queries are executed, resulting in continually improving system performance over time. The introduction of Autopilot widens the performance and price performance advantage of  MySQL HeatWave over other database services. Additional References: Watch the MySQL Autopilot webinar Watch a short demo Learn more about MySQL Database Service Learn more about MySQL HeatWave Try it today!


Community

MySQL Database Service: administrator user

On MySQL Database Service (aka MDS) on Oracle Cloud Infrastructure (aka OCI), when you create a new instance, you also need to set the credentials for the administrator. Let's focus on that account… Please don't forget the password, it cannot be recovered!!

First of all, that account's username has some limitations. Indeed, some names are reserved, as mentioned in the manual. So you cannot use the following usernames:

- administrator
- ociadmin
- ocirpl
- mysql.sys
- mysql.session
- mysql.infoschema

This list can evolve; you can get it directly from the MySQL DB System creation screen. In fact, those accounts already exist in the MySQL DB Instance:

MySQL > select User from mysql.user where user not like 'admin';
+------------------+
| User             |
+------------------+
| administrator    |
| ocirpl           |
| ociadmin         |
| mysql.infoschema |
| mysql.session    |
| mysql.sys        |
+------------------+
6 rows in set (0.0006 sec)

Some people like to use 'root' for their administrator username, but I prefer 'admin', as 'root' doesn't sound correct for an account with some limitations. Because yes, this administrator account has some limitations. You should not forget that MDS is a managed service and therefore the real administrator of the database is the MySQL Team operating it for you. Compared to the 'root' user on a local instance (as of MySQL 8.0.26), the following privileges are not granted to the newly created administrator ('admin') user: RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE, AUDIT_ADMIN, BINLOG_ADMIN, BINLOG_ENCRYPTION_ADMIN, CLONE_ADMIN, CONNECTION_ADMIN, ENCRYPTION_KEY_ADMIN, FLUSH_OPTIMIZER_COSTS, FLUSH_USER_RESOURCES, GROUP_REPLICATION_ADMIN, INNODB_REDO_LOG_ARCHIVE, INNODB_REDO_LOG_ENABLE, PERSIST_RO_VARIABLES_ADMIN, REPLICATION_SLAVE_ADMIN, RESOURCE_GROUP_ADMIN, RESOURCE_GROUP_USER, ROLE_ADMIN, SERVICE_CONNECTION_ADMIN, SESSION_VARIABLES_ADMIN, SET_USER_ID, SHOW_ROUTINE (we will discuss this one in a future article), SYSTEM_USER, SYSTEM_VARIABLES_ADMIN and TABLE_ENCRYPTION_ADMIN.

Quite a large list. This is exactly the reason why I prefer not to call this administrator user 'root'. If you want to list all available privileges, just run 'SHOW PRIVILEGES;'. Some privileges are also revoked in MDS for this 'admin' user, as mentioned again in the manual.

Create another administrator account

So what is the best practice to create a second administrator on my MDS instance? You don't need to remember all the available and authorized privileges: the MySQL Team provided exactly what is needed, the administrator role! If you need to create a new administrator user, you just need to assign it the administrator role, and it will have all the same privileges as the first administrator account at the time of its creation:

MySQL > create user admin2 identified by 'MySQL8isGr3at!' default role 'administrator';
MySQL > show grants for admin2;
+-------------------------------------------+
| Grants for admin2@%                       |
+-------------------------------------------+
| GRANT USAGE ON *.* TO `admin2`@`%`        |
| GRANT `administrator`@`%` TO `admin2`@`%` |
+-------------------------------------------+

And if you connect to your MySQL instance with the newly created user, you will see all the privileges granted and revoked, like for the first created administrator:

SQL > show grants\G
*************************** 1.
row *************************** Grants for admin2@%: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE ROLE, DROP ROLE ON *.* TO `admin2`@`%` WITH GRANT OPTION *************************** 2. row *************************** Grants for admin2@%: GRANT APPLICATION_PASSWORD_ADMIN,BACKUP_ADMIN,CONNECTION_ADMIN,FLUSH_TABLES,REPLICATION_APPLIER,ROLE_ADMIN,XA_RECOVER_ADMIN ON *.* TO `admin2`@`%` WITH GRANT OPTION *************************** 3. row *************************** Grants for admin2@%: REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql`.* FROM `admin2`@`%` *************************** 4. row *************************** Grants for admin2@%: REVOKE CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `sys`.* FROM `admin2`@`%` *************************** 5. row *************************** Grants for admin2@%: GRANT `administrator`@`%` TO `admin2`@`%` Summary To summarize, it’s important to remember that: you should not forget the administrator account’s password the administrator account has all the required privileges to deal with daily MySQL DBA tasks (create users, tables, get statistics out of performance_schema, …) but the administrator account cannot “operate” the instance, the operations (binlogs, tablespaces, …) are MySQL Team’s responsibility you can easily create other administrator accounts using the specific role Thank you for using MySQL and MDS !


Community

MySQL 8.0: all you need to know about SDI

We recently saw that .frm files have been somewhat of replaced for MyISAM tables in MySQL 8.0 (see this post). However, what are those files ? Can they be used for something else than MyISAM tables ? … Let’s try to answer those questions and get more familiar with the .sdi files. What does SDI stand for ? SDI acronym stands for Serialized Dictionary Information. As you may know already, MySQL 8.0 replaced the old way to store the metadata of tables, their structure, into the new transactional Data Dictionary (in InnoDB). Additionally, that information is also part any InnoDB tabespace , its is appended to the tablespace, so the meta data and data are bundled together. For storage engine not supporting this feature, an external file, the actual SDI is created. This is the case for example for MyISAM tables. What do they look like ? So we know that for storage engine not having the capability to store the metadata of the tables, a SDI file is created. What does it look like ? In fact the SDI file is a compact JSON file. It’s name is formed by the table's name and the table’s id. If we take the example table from the previous post, on the file system, the SDI file looks like this: [root@imac my_isam_db]# ls -l *.sdi -rw-r----- 1 mysql mysql 3600 Aug 2 12:49 t1_5017.sdi 5017 is the table’s hidden id. In fact, it’s a string representation of the OID of the dictionary object. The OID ensures uniqueness, which is required since several tables may map to the same name. Now let’s have a look at what that JSON looks like: { "mysqld_version_id": 80026, "dd_version": 80023, "sdi_version": 80019, "dd_object_type": "Table", "dd_object": { "name": "t1", "mysql_version_id": 80026, "created": 20210802104936, "last_altered": 20210802104936, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ { "name": "id", "type": 4, "is_nullable": false, "is_zerofill": false, "is_unsigned": true, "is_auto_increment": true, "is_virtual": false, "hidden": 1, "ordinal_position": 1, "char_length": 10, "numeric_precision": 10, "numeric_scale": 0, "numeric_scale_null": false, "datetime_precision": 0, "datetime_precision_null": 1, "has_no_default": false, "default_value_null": false, "srs_id_null": true, "srs_id": 0, "default_value": "AAAAAA==", "default_value_utf8_null": true, "default_value_utf8": "", "default_option": "", "update_option": "", "comment": "", "generation_expression": "", "generation_expression_utf8": "", "options": "interval_count=0;", "se_private_data": "", "engine_attribute": "", "secondary_engine_attribute": "", "column_key": 2, "column_type_utf8": "int unsigned", "elements": [], "collation_id": 8, "is_explicit_collation": false }, { "name": "name", "type": 16, "is_nullable": true, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": false, "is_virtual": false, "hidden": 1, "ordinal_position": 2, "char_length": 20, "numeric_precision": 0, "numeric_scale": 0, "numeric_scale_null": true, "datetime_precision": 0, "datetime_precision_null": 1, "has_no_default": false, "default_value_null": true, "srs_id_null": true, "srs_id": 0, "default_value": "", "default_value_utf8_null": true, "default_value_utf8": "", "default_option": "", "update_option": "", "comment": "", "generation_expression": "", "generation_expression_utf8": "", "options": "interval_count=0;", "se_private_data": "", "engine_attribute": "", "secondary_engine_attribute": "", "column_key": 1, "column_type_utf8": "varchar(20)", "elements": [], 
"collation_id": 8, "is_explicit_collation": false }, { "name": "inserted", "type": 18, "is_nullable": false, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": false, "is_virtual": false, "hidden": 1, "ordinal_position": 3, "char_length": 19, "numeric_precision": 0, "numeric_scale": 0, "numeric_scale_null": true, "datetime_precision": 0, "datetime_precision_null": 0, "has_no_default": false, "default_value_null": false, "srs_id_null": true, "srs_id": 0, "default_value": "AAAAAA==", "default_value_utf8_null": false, "default_value_utf8": "CURRENT_TIMESTAMP", "default_option": "CURRENT_TIMESTAMP", "update_option": "", "comment": "", "generation_expression": "", "generation_expression_utf8": "", "options": "interval_count=0;", "se_private_data": "", "engine_attribute": "", "secondary_engine_attribute": "", "column_key": 1, "column_type_utf8": "timestamp", "elements": [], "collation_id": 8, "is_explicit_collation": false } ], "schema_ref": "my_isam_db", "se_private_id": 18446744073709552000, "engine": "MyISAM", "last_checked_for_upgrade_version_id": 0, "comment": "", "se_private_data": "", "engine_attribute": "", "secondary_engine_attribute": "", "row_format": 2, "partition_type": 0, "partition_expression": "", "partition_expression_utf8": "", "default_partitioning": 0, "subpartition_type": 0, "subpartition_expression": "", "subpartition_expression_utf8": "", "default_subpartitioning": 0, "indexes": [ { "name": "PRIMARY", "hidden": false, "is_generated": false, "ordinal_position": 1, "comment": "", "options": "flags=0;", "se_private_data": "", "type": 1, "algorithm": 2, "is_algorithm_explicit": false, "is_visible": true, "engine": "MyISAM", "engine_attribute": "", "secondary_engine_attribute": "", "elements": [ { "ordinal_position": 1, "length": 4, "order": 2, "hidden": false, "column_opx": 0 } ] } ], "foreign_keys": [], "check_constraints": [], "partitions": [], "collation_id": 8 } } As you can see, this SDI file contains a lot of information. Just for reference this was the table’s creation statement: CREATE TABLE `t1` ( `id` int unsigned NOT NULL AUTO_INCREMENT, `name` varchar(20) DEFAULT NULL, `inserted` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 Are SDI files used only with MyISAM ? That SDI file can be used to copy MySIAM tables around MySQL 8.0 servers using the specific statement IMPORT TABLE FROM. The SDI files are also used for any other Storage Engine that is not able to embed the metadata information. Currently only InnoDB and NDB don’t require any SDI file. If you create a CVS table, it will also have a SDI: CREATE TABLE `t2` ( `id` int unsigned NOT NULL, `name` varchar(20) NOT NULL DEFAULT '', `inserted` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ) ENGINE=CSV DEFAULT CHARSET=latin1 [root@imac my_isam_db]# ls -l t2* -rw-r----- 1 mysql mysql 3268 Aug 11 13:27 t2_5037.sdi -rw-r----- 1 mysql mysql 35 Aug 11 13:27 t2.CSM -rw-r----- 1 mysql mysql 0 Aug 11 13:27 t2.CSV And of course we can check that information from the SDI file too: [root@imac my_isam_db]# cat t2_5037.sdi | jq ."dd_object"."engine" "CSV" The SDI files provides metadata redundancy. And even for InnoDB tables it can be useful if for example the data dictionary becomes unavailable, object metadata can be extracted directly from InnoDB tablespace files. SDI Information and InnoDB For InnoDB tables, the SDI information can be extracted from the tablespace using a tool called ibd2sdi. 
ibd2sdi can be run on file-per-table tablespace files (*.ibd files), general tablespace files (*.ibd files), system tablespace files (ibdata* files), and the data dictionary tablespace (mysql.ibd). It is not supported for use with temporary tablespaces or undo tablespaces.

Let's try it then with InnoDB. I just created another table (t3) like the previous ones but using InnoDB as storage engine, and indeed in the schema's directory we can see a single .ibd file for it:

[root@imac my_isam_db]# ls
t1_5017.sdi  t1.MYD  t1.MYI  t2_5037.sdi  t2.CSM  t2.CSV  t3.ibd

Now we can generate a SDI file from it:

# ibd2sdi t3.ibd -d t3.sdi

We can parse the newly created SDI file, but it's not required by MySQL; it's just an extra copy of the metadata.

Can I manually modify an existing SDI file ? Of course, this is not recommended. However, if you delete or modify a .sdi file after the table has already been created in MySQL, the server won't require it, as the structure is already part of the Data Dictionary. Every time you modify a table using an engine that requires a SDI file, a new file with a different OID number replaces any previous one.
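Since the SDI content is plain JSON, the same jq approach shown earlier for the engine also works for any other attribute. For example, listing the column names of the MyISAM table t1 from its SDI file (a small sketch based on the dd_object structure shown above):

[root@imac my_isam_db]# cat t1_5017.sdi | jq '.dd_object.columns[].name'
"id"
"name"
"inserted"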


Community

MySQL Database Service: Snapshot your data – restore

On Oracle Cloud Infrastructure, aka OCI, MySQL Database Service, aka MDS uses the snapshot technology to backup your data. This article is the second part of multiple part blog series. In part 1, we dealt with taking snapshots as backup. In this part, we deal with restoring the data. Managed Snapshots We saw in the post one how to create the snapshots automatically (Backup Plan) or Manually. Both type of backups are what we call Managed Snapshots. This means that as a user, we don’t have access to the Object Storage where the backup is stored. So what’s the purpose of these snapshots ? Let’s see them: Data Restore Restoring a MDS backup is recommended of course to save the data and in case of issue (hardware or software problem unlikely or human issue more likely), we have the possibility to recover the data as it was at the time of the backup. There are some important points to know before dealing with the backups and restoring them: if you delete an instance, all the automatic backups (from the Backup Plan) are also gone manual backups are kept for the entire retention period if you restore a backup from an active DB System, you will get a warning as some source values, such as IP address still exist. Let’s see how we can restore a snapshot: Then we get the wizard screen where we have some options we can change. The first section is about changing the compartment, the name and the description of the new MySQL DB System: As you can see, it’s not possible to enable HA if the source wasn’t a HA instance too. You can modify the placement by choosing a different Availability Domain (or Fault Domain): And by default the shape is the same as the snapshot’s source: You finally need to click on Restore and the new instance will be deployed with the same data as the backup. As this is a Volume Group Snapshot, we can see it as a filesystem snapshot. Therefor, to restore at the same consistent point as the source, InnoDB Recovery process is needed and we can see it in the error log: | 2021-07-30 17:56:50.065844 | 1 | Note | MY-012551 | InnoDB | Database was not shutdown normally! | | 2021-07-30 17:56:50.065915 | 1 | Note | MY-012552 | InnoDB | Starting crash recovery. | | 2021-07-30 17:56:50.317713 | 1 | Note | MY-013086 | InnoDB | Starting to parse redo log at lsn = 263759567374, whereas checkpoint_lsn = 263759567747 and start_lsn = 263759567360 After that operation, the system is ready and available: HeatWave We saw previously that HA and HeatWave were not possible during the restore operation. However, as the Shape was compatible with HeatWave, we can still enable it after restore: Change Shape Changing the Shape of your MySQL DB System in OCI is something that almost every new user does. Indeed, you start with a small instance to test and then you want that test to become more important and use it more often. But you quickly realize that you would benefit from a bigger server. More CPU power and more RAM. Or you just want to use HeatWave and you didn’t select a compatible Shape at start. Restoring a backup is the perfect operation to achieve that goal. One important point is that you cannot restore on a shape with less memory: Increase Size As we did for the Shape, it’s also possible to restore using the same Shape but increasing the disk space. This is an easy task, you just click on Restore to New DB System and you increase the size (you can’t decrease): Conclusion Restoring a snapshot is very useful to create a new identical instance of your MySQL DB System or to recover data. 
It's also useful to increase the Data Storage size and, finally, to upgrade the Shape or migrate to a HeatWave compatible shape.
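If you want to verify that the InnoDB recovery mentioned above completed on the newly restored DB System, the error log is exposed in MDS through performance_schema.error_log and can be queried from any client. A quick sketch:

SQL > select LOGGED, PRIO, ERROR_CODE, DATA
      from performance_schema.error_log
      where SUBSYSTEM = 'InnoDB' and DATA like '%recovery%'
      order by LOGGED desc limit 5;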


Community

MySQL Database Service: Snapshot your data - backups

On Oracle Cloud Infrastructure, aka OCI, MySQL Database Service, aka MDS, uses snapshot technology to backup your data. I will explain how it works in this multiple part blog series. In part 1, we will deal with taking snapshots as backups. In part 2, we will see how to restore data and for which usage.

Snapshots

The Oracle Cloud Infrastructure Block Volume service provides you with the capability to group together multiple volumes in a volume group. This is exactly what MDS uses to store the data. We can easily see this from some global variables:

+-----------------------------+--------------------------------+
| Variable_name               | Value                          |
+-----------------------------+--------------------------------+
| datadir                     | /db/data/                      |
| innodb_data_home_dir        | /db/data/                      |
| innodb_log_group_home_dir   | /db/redo                       |
| innodb_tmpdir               | /db/tmp                        |
| log_bin_basename            | /db/binlogs/binary-log         |
| log_error                   | /db/log/error.log              |
| tmpdir                      | /db/tmp                        |
| ...                         |                                |
+-----------------------------+--------------------------------+

/db is the Block Volume Group containing all volumes. This technique simplifies the process of creating time-consistent backups. You can then restore an entire group of volumes from a volume group backup. The snapshot is pushed to Object Storage (encrypted).

Backup

In MDS, it's very easy to take a backup. You have the possibility to configure automatic backups (the default) or to create backups manually. Those backups are used to restore the data on a new instance or to migrate to another shape. You can only migrate to an equivalent or more powerful shape.

Backup Plan (automatic)

A backup plan can be configured during the instance's creation or added at any time after. Automatic backups are retained for between 1 and 35 days. The default retention value is 7 days. Once defined, it is not possible to edit the retention period of an automatic backup. If you delete a MDS System, automatic backups are also deleted, but not the manual ones.

This is how it looks during the MDS instance creation. It seems that the Backup Window can be set only at the instance's creation; afterwards it always stays at 00:00 UTC. When a Backup Plan is defined for a MDS System, you will see it in the Backups section with a blue information frame. If no Backup Plan is configured, you will see an orange frame instead.

Manual Backups

In MDS, you also have the possibility to manually start a backup/snapshot of your database. This backup can be initiated by an action in the console, or by a request made through the API. Manual backups can be retained for a minimum of 1 day and a maximum of 365 days. Currently, there is a limit of 100 manual backups per tenancy. This limit can be increased per customer request.

Creating a backup from the OCI Dashboard is easy. You need to select the MySQL DB System you want to backup and, on the left, click on Backups, then just click on the "Create Manual Backup" button. If you don't have any backup of the DB System already, the only option is to create a Full Backup. As you can see, you can specify the retention period and add tags if you want. In case you already have a Full Backup, you also have the possibility to create Incremental Backups that will use less storage of course.

Manual Backup from CLI

It's also possible to start a manual backup directly from the command line. Let's use the Cloud Shell to illustrate this. This will create a new backup and we can see it in the dashboard.

Backup Details

Every backup has interesting details. You can see (blue lines) that the snapshot was very fast: 100G in 20 seconds. This is an online snapshot; the MySQL DB System doesn't need to be stopped. It's also possible to change the name, the description and the retention days by clicking on Edit.

Conclusion

Backups on MDS are performed using snapshots. They are very quick and don't require downtime. Those backups can also be used to migrate to other shapes. In the next post, we will see how to restore such backups. Thank you for using MySQL and MySQL Database Service !
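For reference, a manual backup can also be requested with the oci CLI from Cloud Shell. Something along these lines should work; the DB System OCID is a placeholder, and it is worth confirming the exact option names with oci mysql backup create --help:

$ oci mysql backup create \
    --db-system-id ocid1.mysqldbsystem.oc1... \
    --backup-type FULL \
    --display-name manual-backup-$(date +%F) \
    --retention-in-days 30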


Community

Using OCI Cloud Shell & Bastion with MySQL Database Service

Recently, Oracle added a Bastion Service to OCI. And you may also have noticed that the OCI Dashboard offers you the possibility to use a browser based terminal: Cloud Shell. Today, we will see how we can use these two components to connect from the browser to a MDS DB System. We need the MySQL DB System's IP: in this example, the MDS instance we want to connect to has 10.0.0.99 as IP.

Bastion Service

Now we will create a new Bastion Service that will allow us to create a SSH Tunnel to our MySQL DB System. The Bastion Service's dashboard is located in Identity & Security. If this is the first time you create a Bastion, the list will be empty and you just need to create one. We need to select the VCN, the subnet and a block of allowed IPs. As I don't know the IP of the Cloud Shell, I will use 0.0.0.0/0. If you prefer not to use 0.0.0.0/0, you need to add the public IP used by Cloud Shell with /32:

fdescamp@cloudshell:~ (us-ashburn-1)$ curl ifconfig.me
1XX.XXX.XXX.XXX

Now that the Bastion is created, I need to create a session that will be used to create the SSH Tunnel to MDS. But before creating the session, we will start the Cloud Shell and generate a SSH Key we will use for the tunnel's session.

Cloud Shell

To start Cloud Shell, we just click on its icon in the top right corner of the OCI Dashboard. This will open the Cloud Shell in the browser; the first time, it takes some time to generate it. In the Cloud Shell, we can now create the SSH Key we need using the following command:

$ ssh-keygen -t rsa

As you can see, the public key we need will be stored in ~/.ssh/id_rsa.pub.

Bastion Session

As we now have all we need to create the Bastion Session for the SSH Tunnel, we can go back to the Bastion we created earlier and create a new session. We need to choose the SSH port forwarding session as Type, add the MySQL Database System's IP and paste the SSH Public Key. The session is created for 180 minutes, and you will have something like this: if you click on the 3 vertical dots, you can view or copy the ssh command we need to run in Cloud Shell.

-i <privateKey> is not really required as we only have one single key for the moment. And the error message "bind: Cannot assign requested address" is not a problem; this is just because the Cloud Shell tries to bind on ipv6 too. If you want to avoid it, just add -4 between ssh and -i, like this: ssh -4 -i. Please mind the & at the end of the ssh command.

Connecting to MDS

And finally, we can connect to the MySQL Database Service instance from Cloud Shell simply using MySQL Shell. As you can see, it's easy to connect from Cloud Shell once the Tunnel is ready. Enjoy MySQL, MySQL Database Service and OCI ! For Dump & Load of data to/from MDS, I recommend using a dedicated compute instance with multiple cores instead of Cloud Shell.
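To make the flow above concrete, the port-forwarding command copied from the Bastion session and the final connection typically look like the sketch below. The session OCID, region, private key path and MySQL user (admin) are illustrative placeholders; the real ssh command is the one the console gives you:

$ ssh -4 -i ~/.ssh/id_rsa -N -L 3306:10.0.0.99:3306 -p 22 \
     ocid1.bastionsession.oc1.iad.xxxxx@host.bastion.us-ashburn-1.oci.oraclecloud.com &
$ mysqlsh admin@127.0.0.1:3306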


Community

MySQL Shell Dump & Load and Compression

MySQL Shell is the popular tool to work with MySQL and it integrates perfectly with everything for MySQL Database Service (MDS) in Oracle Cloud Infrastructure (OCI). For any logical dump and load of data, and especially to dump data to MDS, the MySQL Shell Utility is the recommended solution. MySQL Shell Dump & Load is faster, is parallel and is compatible with OCI (block storage, MDS grants, automatic primary key creation, ...).

This post is about compression. By default, MySQL Shell Dump uses zstd compression. Zstd, short for Zstandard, is a fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level compression ratio. It's possible to specify the compression algorithm to use for the MySQL Shell Dump Utility with the compression option. The accepted values are:

- none
- zstd (default)
- gzip

If you plan to use compression and multiple threads (parallel), it's recommended to have a powerful instance for MySQL Shell. To compare the compression algorithms and the old mysqldump, I use a server with 8 cores (Intel(R) Xeon(R) Platinum 8167M CPU @ 2.00GHz - VM.Standard2.4 on OCI). The data is one single table of 47GB of InnoDB data !

Dumping Data

The default parallelism of MySQL Shell is to use 4 threads. If you have a large dataset to dump & load, I recommend having a machine with more cores and specifying more threads with the threads option, like:

JS > util.dumpSchemas(["ontime"], "/home/opc/dump_zstd", {"threads": 32})

MySQL Shell with Zstd

Let's start with MySQL Shell Dump and zstd (the default):

JS > util.dumpSchemas(["ontime"], "/home/opc/dump_zstd", {"compression": "zstd"})
Duration: 00:01:49s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 34.64 GB
Compressed data size: 3.03 GB
Compression ratio: 11.4
Rows written: 105336666
Bytes written: 3.03 GB
Average uncompressed throughput: 317.36 MB/s
Average compressed throughput: 27.80 MB/s

MySQL Shell with gzip

Now, we try the same with gzip as compression method:

JS > util.dumpSchemas(["ontime"], "/home/opc/dump_gzip", {"compression": "gzip"})
Duration: 00:03:33s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 34.64 GB
Compressed data size: 4.22 GB
Compression ratio: 8.2
Rows written: 105336666
Bytes written: 4.22 GB
Average uncompressed throughput: 162.36 MB/s
Average compressed throughput: 19.76 MB/s

mysqldump

As I already claimed, mysqldump (and mysqlpump) should not be used anymore, but just for info, mysqldump+gzip took 21m12sec and mysqldump+zstd took 15m20sec.

Results

If we check the size of the dumps and the speed, it's obvious that MySQL Shell with Zstd (compression ratio of 11.4) is the best option:

[root@mysql-shell opc]# du -sh *
4.0G  dump_gzip
2.9G  dump_zstd
4.1G  mysqldump_gzip.sql.gz
3.8G  mysqldump_zstd.sql.zst

As I wrote earlier, the power of the MySQL Shell machine matters. We can compare the CPU usage: MySQL Shell uses all the power of the instance, but this is of course not a problem on a dedicated instance for such an operation.

Conclusion

By default, the MySQL Shell Dump & Load utility provides a very good compression algorithm that can save you a lot of disk space. Additionally, with its parallelism, the Dump & Load is much faster than the traditional mysqldump. MySQL Shell can dump AND load in parallel, even for a single table ! As you could notice, we started with 47GB and finished with 3GB of data, which is a compression of 93% ! If you are looking for a logical dump solution, MySQL Shell is the best option. Thank you for using MySQL and MySQL Shell.
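Both options can of course be combined. As a sketch, a dump and its corresponding load using 16 threads and the default zstd compression would look like this (schema name and path reused from the examples above):

JS > util.dumpSchemas(["ontime"], "/home/opc/dump_zstd", {"threads": 16, "compression": "zstd"})
JS > util.loadDump("/home/opc/dump_zstd", {"threads": 16})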


MySQL Cluster

Scale-up MySQL NDB Cluster 8.0.26 to +1.5M QPS the easy way with AMD EPYC 7742

On July 20th, 2021, we’ve celebrated the release of MySQL NDB Cluster 8.0.26. MySQL NDB Cluster (or NDB for short) is part of the MySQL family of open-source products providing an in-memory, distributed, shared-nothing, high-availability storage engine usable on its own or with MySQL servers as front-ends. For the complete changeset see release notes. Download it here.  Choosing a database can be an overwhelming task, requiring to consider performance (throughput and latency), high availability, data volume, scalability, ease of use/operations, etc. These considerations are affected by where the database runs — whether that is in a cloud provider such as Oracle Cloud Infrastructure offering a broad range of infrastructure from small Virtual Machines (VM) to large Bare Metal (BM) instances, and High-Performance Computing (HPC) servers or one’s own on-premises hardware. When aiming at the very best performance, databases can be complex beasts requiring understanding and experimenting with hundreds of different tuning parameters. This task can be made even more complex by going one level deeper, into the operating system, tweaking kernel settings to best match the database requirements. Finally, tuning a database is done for a specific workload and the same tuning settings might result in sub-optimal performance when running different workloads — yet another complication. All of this can be a lot of fun and very painful at the same time. Can we achieve a high-performance and highly-available cluster in a easy way? Having access to a brand new high-end Dell EMC PowerEdge R7525 server, dual-socket AMD EPYC 7742 with 2 TB RAM, and four 3TB NVMe SSD we set ourselves to explore how to do an easy setup for a high-performance, highly-available cluster in a single box with the newly released MySQL NDB Cluster 8.0.26. Jumping ahead and answering the question — yes —using sysbench OLTP point select benchmark, we can easily achieve a constant throughput of over 1.5M primary key lookups per second with a two data-node cluster, each data-node configured with 32 CPU, using a total of 16 MySQL servers and 1024 clients (sysbench threads). The chart above shows the results of a 1h long run. Looking at the blue line, we can see a constant throughput in the range of 1.6–1.7 million queries per second (primary-key lookups). Max recorded throughput is at 1,716,700 primary key lookups per second. Also important is the 95th percentile latency, the red line, which is in the range of 1.1–1.6 milliseconds and has an average of 1.35 milliseconds. So how easy is it to configure MySQL NDB Cluster for these results? It was pretty straightforward! In the following sections, we will describe in detail the hardware, NDB configuration, benchmark setup, and the analyses of intermediate results that lead to those performance numbers. Hardware setup All our tests run on a single Dell EMC PowerEdge R7525 server configured with: Dual 2nd generation AMD EPYC 7742 (Rome) 64-Core processor, 128 threads, 7nm design, max clock speed of 3900 MHz, and 256 MB cache 32 x 64 GB DDR4 dimms (SK Hynix), 3200 MT/s speed, total of 2 TB RAM 4 x 3.2TB NVMe drives (Dell Express Flash PM1725b) Similar specification servers are available by choosing the BM.Standard.E3.128 shape currently offered in Oracle Cloud Infrastructure (OCI). For the list of available shapes and their specifications see here. The server was installed with Oracle Linux Server 8.3 using Kernel version 5.4.17 (2021.7.4). 
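Before pinning processes as described in the following sections, it is worth confirming how the sockets, hardware threads and NUMA nodes are exposed by the operating system. A quick check, assuming standard Linux tooling is available on the box:

$ lscpu | grep -E 'Socket|Thread|NUMA'
$ numactl --hardware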
Benchmarking setup All our benchmarks were run with sysbench 1.1.0 version available from https://github.com/akopytov/sysbench. No changes to the benchmark code were done for transparency and reproducibility purposes. We choose to use sysbench as it is well-known and simple to use benchmark used to evaluate database performance under different load scenarios. Our dataset uses 8 tables and 10M rows per table using around 60GB of memory. This configuration is the most common starting point for many benchmarks done in the MySQL team (for both InnoDB and NDB Cluster storage engines). The data is large enough to not just run on CPU cache only but not large enough to involve too much IO activity (e.g. long-duration node restarts or dataset initialization). We used OLTP point-select workload consisting of primary-key lookup queries returning a constant string value. This workload tests the full database stack (MySQL servers and NDB storage engine) for overall code efficiency and the best possible query execution latency. Key generation is done using the default uniform distribution algorithm. Sysbench is run in the same machine as the database, connecting to MySQL servers via Unix sockets. MySQL NDB Cluster setup A minimal recommended high-availability scenario requires 4 hosts: 2 hosts running data nodes and 2 hosts running management nodes and MySQL servers or applications (see FAQ). In this scenario, any of the hosts can be unavailable without impact on the service. Using a single-box setup software-level redundancy can be supported by running two data nodes and multiple MySQL servers or applications. In this scenario, we can perform online operations such as online upgrades without service impact. To take advantage of a server like the one we’re using, we can equally split machine resources for each data node and set of MySQL servers. MySQL NDB Cluster “Cluster-in-a-box” setup using dual-socket server In this setup, we use a single NUMA node per socket (physical CPU). The server supports configuring up to 4 NUMA nodes per socket (a total of 8 NUMA nodes — see AMD Tuning Guide for more info). For each NUMA node, we run a single data node and a balanced number of MySQL servers accessing half of the available memory. Note that, despite MySQL NDB is an in-memory database, disk-checkpointing is enabled by default (and a recommended setting). In our setup, all NVMe disks are available from a single NUMA node only (ideally we would have half of the disks per NUMA node). Having defined the cluster topology, using two data nodes and several MySQL servers, the next step is to define how many CPU resources to allocate to NDB and MySQL server processes. We have found that a 25/75 CPU allocation provides a good starting point.   MySQL NDB Cluster has been designed to be very efficient and it’s natural that it requires fewer resources than MySQL server. The actual division of resources will depend on the workloads. In cases where queries can be pushed to the data nodes, it will make sense to reserve more CPU for NDB. In cases where SQL-level aggregations or functions are performed, then more CPU is necessary for MySQL server. With the above resource allocation, for each socket (physical CPU) we’ll reserve 16 cores (32 threads) for NDB data node (ndbmtd) process and 48 cores (96 threads) for MySQL server (mysqld) processes. 
The main NDB Cluster configuration is:  [ndbd default] NoOfReplicas = 2 DataMemory = 128G # Auto configures NDB to use 16 cores/32 threads per data node AutomaticThreadConfig = 1 NumCPUs = 32 NoOfFragmentLogParts = 8 # Prevents disk-swapping LockPagesInMainMemory = 1 # Enables Shared-Memory Transporters (20% performance gain) UseShm=1 # Allocates sufficient REDO log to cope with sysbench prepare step RedoBuffer=256M FragmentLogFileSize=1G NoOfFragmentLogFiles=256 The key elements of this configuration are: NoOfReplicas: defines the number of fragment replicas for each table stored in the cluster. With two data nodes, it means that each will contain all the data (ensuring redundancy in case any of the data nodes is down). DataMemory: the amount of memory used to store in-memory data. We have set it to 128G in our benchmark but we could increase it up to 768G given that we have 1TB of RAM available per data node (still leaving a big margin for the operating system). AutomaticThreadConfig: when enabled allows the data node to define which NDB-specific threads to run. NumCPUs: restricts the number of logical CPUs to use. We have set it to 32 which means that we’re expecting NDB to take advantage of the 16 cores / 32 threads available. NoOfFragmentLogParts: optional configuration, sets the number of parallel REDO logs per node. We have set it to 8 because there will be 8 LDM threads when using NumCPUs=32. This enables each LDM thread to access REDO log fragments without using mutexes — leading to slightly better performance. LockPagesInMainMemory: prevent swapping to disk, ensuring best performance. We have set to 1 in which we lock the memory after allocating memory for the process. UseShm: enables shared memory connection between data nodes and MySQL servers. This is a must when co-locating MySQL servers with data nodes as it provides a 20% performance improvement. The other configuration options are required only to run sysbench prepare command used to fill data in the database. They have no impact when running OLTP point select workload but might have in other workloads.  The management-node and data-node specific options are: [ndb_mgmd] NodeId = 1 HostName = localhost DataDir = /nvme/1/ndb_mgmd.1 [ndbd] NodeId = 2 HostName = localhost DataDir = /nvme/1/ndbd.1 [ndbd] NodeId = 3 HostName = localhost DataDir = /nvme/2/ndbd.2 These options define one management node and two data nodes. For each, we set unique identifiers (NodeId), the hosts from where they will be running (HostName), set to localhost, and finally, the path where to store required files (DataDir). The final configuration required for NDB processes is to add API nodes required allowing MySQL servers and NDB tools to connect to the cluster. An excerpt of those configurations are: [mysqld] NodeId = 11 HostName = localhost ... [api] NodeId = 245 ... For a complete list of data node configuration parameters see here. Finally, the MySQL server configuration is as follows: [mysqld] ndbcluster ndb-connectstring=localhost max_connections=8200 # Below three options are for testing purposes only user=root default_authentication_plugin=mysql_native_password mysqlx=0 [mysqld.1] ndb-nodeid=11 port=3306 socket=/tmp/mysql.1.sock basedir=/nvme/3/mysqld.1 datadir=/nvme/3/mysqld.1/data [mysqld.2] ndb-nodeid=12 port=3307 socket=/tmp/mysql.2.sock basedir=/nvme/4/mysqld.2 datadir=/nvme/4/mysqld.2/data ... There are three important settings specified under [mysqld] that are needed by all MySQL. 
Those are: ndbcluster: enables NDB Cluster storage engine; ndb-connectstring: explicitly sets the address and port of all management nodes used to connect to NDB Cluster. In our case, where the management node is run locally, this setting is optional; max_connections: optional, required when running benchmarks with a large number of clients; For each MySQL server we need to define individual configuration at least for port, socket, basedir, and datadir. The complete configuration files and instructions are available from https://github.com/tiagomlalves/epyc7742-ndbcluster-setup Running MySQL NDB Cluster, MySQL server, and sysbench To run MySQL NDB Cluster and MySQL Server we assume all packages are installed in the system and available in the path. We assume that sysbench has been compiled and installed in the system.  We use numactl to set process affinity to specific CPUs / NUMA nodes according to the above-defined setup, where we reserve 25% of CPU capacity for NDB data nodes and the remainder 75% CPU capacity for other processes.  To run the management node: $ numactl -C 60-63,188-191,124-127,252-255 \ ndb_mgmd \ --ndb-nodeid=1 \ --configdir="/nvme/1/ndb_mgmd.1" \ -f mgmt_config.ini Note that the management node (ndb_mgmd) consumes very little resources and can be run from any logical cpu. The above numactl settings allow ndb_mgmd to run from any CPU in any NUMA node except for those reserved for data nodes. To run data nodes: $ numactl -C 0-15,128-143 \ ndbmtd --ndb-nodeid=2 $ numactl -C 64-79,192-207 \ ndbmtd --ndb-nodeid=3 The first data node, having nodeid 2, is run in the first 16 cores of the first NUMA node (NUMA #0), and hence affinity is set to CPUs 0–15 and 128–143. The second data node, having nodeid 3, is run in the first 16 cores of the second NUMA node (NUMA #1) and hence affinity is set to CPUs 64–79 and 192–207. To run MySQL servers: $ numactl -C 16-63,144-191 \ mysqld \ --defaults-file=my.cnf \ --defaults-group-suffix=.1 $ numactl -C 80-127,208-255 \ mysqld \ --defaults-file=my.cnf \ --defaults-group-suffix=.2 Multiple MySQL servers are run per NUMA node. We decided to run all odd -numbered MySQL servers in NUMA #0 (CPUs 16–63 and 144–191) and even -numbered MySQL servers in NUMA #1 (CPUs 80–127 and 208–255). Multiple MySQL servers in the same NUMA node will share all CPUs except those reserved for the data nodes. It’s possible to have each MySQL server process running in a dedicated set of CPUs preventing shared CPU resources between processes. This approach requires careful validation as discussed later. To run sysbench: $ THREADS=1024 ; \ MYSQL_SOCKET=/tmp/mysql.1.sock,/tmp/mysql.2.sock,... ; \ numactl -C 16-63,144-191,80-127,208-255 \ sysbench \ --db-driver=mysql \ --mysql-storage-engine=ndbcluster \ --mysql-socket="${MYSQL_SOCKET}" \ --mysql-user=root \ --tables=8 \ --table-size=10000000 \ --threads="${THREADS}" \ --time=300 \ --warmup-time=120 \ --report-interval=1 \ oltp_point_select run We run Sysbench in the same machine as MySQL NDB Cluster and MySQL servers using Unix sockets. In typical scenarios, however, applications and database are run from different hence requiring using the TCP/IP network stack instead. In such scenarios it’s expected an inferior performance than what we report here. All our runs have a duration of 300 seconds (5 minutes) with a warm-up period of 120 seconds (2 minutes). In practice, we have seen that with 2 minute warm-up duration, it suffices to run the benchmark for 1–2 minutes. 
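For completeness, the dataset used in these runs (8 tables, 10M rows each) is created with a sysbench prepare step. A sketch derived from the run command above; the sbtest schema is created first since sysbench expects it to exist:

$ mysql -uroot -S /tmp/mysql.1.sock -e "CREATE DATABASE IF NOT EXISTS sbtest"
$ sysbench \
    --db-driver=mysql \
    --mysql-storage-engine=ndbcluster \
    --mysql-socket=/tmp/mysql.1.sock \
    --mysql-user=root \
    --tables=8 \
    --table-size=10000000 \
    --threads=8 \
    oltp_point_select prepare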
We have validated running the benchmark for over 1h periods and there was not significant variation in the mean throughput recorded. Scaling up a single MySQL server Our first step when benchmarking MySQL NDB Cluster was to start with a single MySQL server. This gives us a base understanding of the specific workload we’re testing helping us to know how to allocate resources and fine tune further parameters.   In the chart above we show the throughput (measured in queries per second), depicted in blue, and the CPU utilization of NDB data nodes (ndbmtd), depicted in green, and MySQL server (mysqld), depicted in yellow. Throughput scale is shown in the left y-axis. CPU utilization is shown in the right y-axis. On the x-axis, we show the number of clients (sysbench threads) used for each run. This chart shows that when increasing from 1 up to 32 clients, throughput and mysqld CPU utilization increase accordingly. In the same range, ndbmtd CPU utilization increases only slightly.  At 32 clients, we reach the maximum throughput of ~230K queries per second. MySQL server (mysqld) CPU utilization is about ~1400% meaning that a total of 14 logical cpus are being used.  From 32 to 64 clients, we see mysqld CPU utilization almost doubling to (2500% — 25 logical cpus) causing a slight throughput degradation. From 64 clients to 128 we see further degradation of throughput and the CPU utilization curve for mysqld flattens meaning that the MySQL server is saturated. At this stage, mysqld is using ~3000% of CPU (30 logical CPUs) out of the 96 logical CPUs available (48 cores / 96 threads). From 128 clients onward, there’s no further increase in throughput or CPU utilization. This chart means that the optimal throughput conditions using a single MySQL server for OLTP point select workload happens at 32 clients and around 64 clients we reached the maximum of MySQL server.  We know that the bottleneck is in the MySQL server and not in NDB Cluster because NDB utilization is fairly low. This can be seen using the ndb_top tool:   The above screenshot shows the CPU utilization of the different NDB threads (ldm, query, tc, send, recv, and main). For more info on the NDB internals see here. From our configuration, we know that we have 3 recv threads but we have only one at 40% and the remainder idle. We also see that most other threads have a low utilization < 80%. This confirms that the bottleneck is in the MySQL server side and not in NDB. To address the MySQL bottleneck, we can simply scale up the number of MySQL servers. As shown above, the optimal conditions for throughput happen when using 32 clients for which 14 logical CPUs are used by MySQL server. Considering that we have 48 cores / 98 threads per socket, we can have ~98/14 = 7 MySQL servers per socket. Rounding up this gives us around 8 MySQL servers per socket. Scaling up multiple MySQL servers When a single MySQL server becomes saturated we can continue to scale by adding more MySQL servers. Previously we estimated that we could use up to 8 MySQL servers per socket, or 16 MySQL servers in total.   The above chart shows a series of tests done with an increasing number of clients (sysbench threads) for 1 to 16 MySQL servers. In this chart, we record the average queries per second over a 300 second period. The chart shows that after 32 clients, we need to double the number of MySQL servers to sustain an increasing throughput with more clients (sysbench threads). This is what we expected. 
Also note that, when going from 8 to 16 MySQL servers, we are no longer capable to double the throughput. Maximum throughput is reached with 16 MySQL servers using 1024 clients (sysbench threads). When adding extra clients the throughput starts to degrade meaning the system is becoming saturated. Also interesting is to look at the latency when running an increasing number of clients for a different number of MySQL servers.   When using a single MySQL server, the 95th percentile latency is below 0.5ms up to 32 clients (sysbench threads) and then exponentially grows when more clients are used. Doubling the number of MySQL servers allows doubling the number of clients (sysbench threads) keeping the same low latency below 0.5ms. However, when using 8 or more MySQL servers, we can no longer keep latencies below 0.5ms using 512 clients or more — meaning the system starts to be saturated. Note as well that there’s no significant difference in latency between using a total of 8 or 16 MySQL servers (16 being a bit worse from 2048 clients or higher). The above charts show the average throughput and 95th percentile latency for a full run. These aggregated values provide little information about the throughput and latency stability during the run. This shown below:   The above chart shows the average throughput (in blue) and 95th percentile latency (in red) sampled every second using 16 MySQL servers for an increasing number of clients (sysbench threads). For both throughput and latency, except when we’re reaching system saturation, both are pretty stable. In the case of throughput, we can observe a steady throughput even when latency goes above 0.5ms which is expected from an in-memory database. Going above 1.7M QPS Using this Cluster configuration and workload it is possible to go above 1.7M queries per second. It’s also possible to further reduce the variation in measurements by fine-tuning operating system settings. However, this is no longer an easy task requiring experimenting with other configuration parameters, falling outside the scope of this blog. Regardless, to give you a hint about possible next steps, we can start by looking at the output of ndb_top:  This confirms that now the bottleneck of the overall system is the NDB cluster. More precisely, the tc threads are at 80% CPU utilization having reached saturation. Despite that, the other threads are still far away from being saturated which leaves space for further optimization. When enabling AutomaticThreadConfig and configuring NumCPUs=32, NDB will make use of 8 ldm, 8 query, 4 tc, 3 send, 3 recv, and 1 main threads. From the above, we see that the tc threads are saturated but the ldm+query threads are still not being fully utilized. To further try to improve query execution we could manually set the number of threads to use, reducing the number of ldm+query threads and adding a few more tc threads. But that, we’ll leave it for another blog post! Conclusion MySQL NDB Cluster has been developed with the goal of enabling horizontal scaling. However, with the continuous improvement of high-end hardware, it’s important to have a simple way to scale up a database. This blog provides an introductory walk-through on how to scale up MySQL NDB Cluster 8.0.26 in an easy way reporting over 1.7M primary key lookups per second. MySQL NDB Cluster is an open-source distributed in-memory database. 
It combines linear scalability with high availability, providing in-memory real-time access with transactional consistency across partitioned and distributed datasets. It was developed to support scenarios requiring high-availability (99.999% or more) and predictable query time. Source code and binaries are available from: https://www.mysql.com/products/cluster  
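As a quick sanity check when reproducing a setup like this, the NDB management client can confirm that both data nodes and all MySQL servers are connected, and report memory usage (ndb_mgm is part of the cluster packages assumed installed above):

$ ndb_mgm -e show
$ ndb_mgm -e "all report memoryusage"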


Community

MySQL: Character Sets, Unicode, and UCA compliant collations

With MySQL 8.0, the version used by MySQL Database Service aka MDS, the default character set has changed from latin1 to utf8mb4. The default collation is utf8mb4_0900_ai_ci, but what does that mean ? And why are the utf8mb4_0900_* collations the recommended ones ?

Collations like utf8mb4_unicode_520_ci and utf8mb4_0900_ai_ci are based on the Unicode Collation Algorithm (UCA). The number in the collation defines the UCA version:

- UCA 9.0.0 (recommended) - example: utf8mb4_0900_ai_ci
- UCA 5.2.0 (not recommended, see problems below) - example: utf8mb4_unicode_520_ci

The default collation in MySQL 8.0 is utf8mb4_0900_ai_ci. Now let's have a look at what those collations are used for. We start with creating a table like this:

CREATE TABLE collation_ex (
  id int NOT NULL AUTO_INCREMENT,
  0900_ai_ci varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL,
  unicode_520_ci varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci DEFAULT NULL,
  general_ci varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

We can see that we have 3 varchar columns using different collations, where the first 2 are UCA compliant (even if the second is less so). If we look at the content of that table and then sort it using each of those varchar columns, we can see the difference and notice that the sorting using the default MySQL 8.0 collation (utf8mb4_0900_ai_ci, on the first column) is the one giving the correct result. This UCA compliant collation sorts the different 'a' variants as 'a'. We can also notice differences in the ordering of the Japanese characters.

For more information about UCA support in MySQL and why the correct results are important, take a look at these previous posts by my colleagues in the engineering team: "Sushi = Beer ?! An introduction of UTF8 support in MySQL 8.0" and "MySQL 8.0 Collations: The devil is in the details".

Another example with Cyrillic characters illustrates the difference between Unicode 5.2 and 9.0. And of course I could not write a blog post on Charsets and Collations without showing the famous Sushi = Beer example. If you don't remember what _as and _ci mean in a collation name, they stand for accent sensitive and case insensitive. The accent sensitivity over Unicode charsets can be illustrated for example with some Japanese words, where it takes all its importance.

I hope this clarifies a bit why using UCA compliant collations (the default in MySQL 8.0) is important and why we are focusing on those and no longer on the old utf8 collations that are not UCA compliant. Currently MySQL 8.0 supports 49 utf8mb4 collations compatible with UCA 9.0.0. To get the list, just run the following statement:

mysql> show collation like 'u%900%';

Thank you for using MySQL !

In summary, we recommend using:
- utf8mb4_0900_*
- utf8mb4_bin

Please do not use:
- utf8_* (no support for Emoji, missing CJK characters, etc)
- utf8mb3 (no support for Emoji, missing CJK characters, etc)
- utf8mb4_general_ci (has problems like the Sushi-Beer one explained above)
- utf8mb4_unicode_ci (has problems like the Sushi-Beer one explained above)
- utf8mb4_unicode_520_ci (has problems like the Mother-Daddy issue for Japanese, where characters with p sounds and b sounds match and cannot be distinguished)
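As a small illustration of the Sushi = Beer problem mentioned above, the comparison below returns 1 (equal) under utf8mb4_general_ci but 0 under the UCA 9.0.0 collation. This is only a sketch and assumes the connection character set is utf8mb4:

mysql> SELECT '🍣' = '🍺' COLLATE utf8mb4_general_ci  AS sushi_is_beer_general_ci,
              '🍣' = '🍺' COLLATE utf8mb4_0900_ai_ci  AS sushi_is_beer_0900_ai_ci;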


Announcements

Announcing July 2021 Releases featuring MySQL 8.0.26

MySQL Server 8.0.26 and 5.7.35, new versions of the popular Open Source Database Management System, have been released in conjunction with the 8.0.26 Connector and Component products.  We are also pleased to announce the release of MySQL Cluster 8.0.26, the latest GA, along with 7.6.19, 7.5.23, 7.4.33. MySQL Cluster is the distributed, shared-nothing variant of MySQL.  These releases are recommended for use on production systems and are available in source and binary form for a number of platforms from our MySQL Community download pages at: https://dev.mysql.com/downloads/ Windows packages are available via the Installer for Windows or .ZIP (no-install) packages for more advanced needs. The point and click configuration wizards and all MySQL products are available in the unified Installer for Windows: http://dev.mysql.com/downloads/installer/ Enterprise binaries for these new releases are available on My Oracle Support: https://support.oracle.com Choose the “Patches & Updates” tab, and then choose the “Product or Family (Advanced Search)” side tab in the “Patch Search” portlet. MySQL 8.0.26 enterprise releases are also available on the Oracle Software Delivery Cloud: http://edelivery.oracle.com/ Changes in the release since the previous one can be viewed online as shown below. https://dev.mysql.com/doc/#current-release-notes    


Digital14 Relies on MySQL Enterprise Edition for Enhanced Security

Digital14 provides secure solutions, cybersecurity, and digital transformation services to public and commercial sector organizations in the United Arab Emirates. Digital14 supports intelligent and intrinsically secure digital transformations that create great experiences and unlock productivity for enterprise and government alike. The company's unique expertise in cybersecurity enable organizations to innovate with confidence and accelerate growth. As a pioneer in smart cities and Internet of Things (IoT), the UAE is also increasingly vulnerable to the threat of cyber-attacks. Digital14's KATIM platform is engineered to deliver ultra-secure communication and collaboration capabilities via secure network infrastructure, devices, applications, and services to protect its customers from the attacks. The company offers a wide range of customizable applications, including KATIM Messenger, KATIM Mail, and KATIM gateway. In addition, it has developed the ruggedized KATIM business phone and RO1 smartphone designed to withstand extreme field conditions. Digital14 needed to strengthen its solutions with a highly secure and available back-end database to offer maximum security to its customers. It originally chose MySQL Community Edition but after a year upgraded to MySQL Enterprise Edition to benefit from features such as advanced security and professional 24/7 support. Business Challenges & Goals Strengthen backend services for cyber resilient applications offered to public sector customers in the UAE to guarantee consistency, integrity, and high availability. Acquire an online backup and recovery system with close to zero downtime to accelerate secure data migrations and bolster service level agreements. Take advantage of professional support and knowledge sharing to offer cutting edge solutions and counter cybercriminal threats to critical public sector assets. Improve agility and time to market with customized cybersecurity products by adopting best practices for DevOps. Business Results & Metrics Benefited from increased performance, enhanced security features and the professional support endorsed by developers and administrators of end-to-end secure communications solutions by upgrading from MySQL Community Edition to MySQL Enterprise Edition 8.0. Maximized availability of secure applications with MySQL InnoDB Cluster, a group replication infrastructure with no data loss, automated failover, and scale-out for optimizing uptime of public sector cybersecurity and digital transformation solutions. Achieved near zero downtime with MySQL Enterprise Backup, reducing backup time from 4 hours to 5 minutes and decreasing restore time from 18 hours to 14 minutes, enabling much faster security application migrations and more reliable service level agreements. Increased customer trust and peace of mind by deploying advanced MySQL Enterprise Edition security tools including Authentication, Firewall, and Encryption for maximum protection against data breach and cyberattacks on communication, messaging, and conferencing services. Boosted product development and rollout efficiency by leveraging Oracle Premier Support for MySQL for consultancy and knowledge sharing around the optimum design, testing, and implementation of MySQL as the back-end database for digital security applications, devices, and networks. Improved DevOps collaboration, increased agility, and shortened time to market by implementing MySQL Document Store to guide best practices between development and operations teams. 
Used MySQL Enterprise Edition as the mobile device management back-end for the company's ruggedized smartphones, designed to withstand the extreme field conditions experienced by customers working in the defense, emergency services, transport, and oil and gas industries.

"One of the greatest benefits has been the amazing support from the MySQL team of architects assisting our enterprise-grade solution through consultancy, pilot design, workflow review, and technical support. You just cannot measure the value of that," said Pawel Grzegrzolka.

Read the full case study
Download MySQL Enterprise Edition
Try MySQL Database Service with HeatWave


Community

Reminder when using MySQL Shell

If, like me, you are using MySQL Shell on many different systems and new installations, there are some commands I often run to get the environment I like. What I really want to enable is, of course, the history, a nicer prompt and sometimes the default mode I want to use. This blog post is just a little reminder of those commands I usually repeat on all my systems:

mysql-js> shell.options.setPersist('history.autoSave', 1)
mysql-js> shell.options.setPersist('history.maxSize', 5000)

The commands above are for MySQL Shell in JS mode, which is the default. To play with options using the same command in every mode (JS, Python, SQL), you can use \option like:

\option --persist history.autoSave 1

Then I set the prompt to have something like this:

$ cp /usr/share/mysqlsh/prompt/prompt_256pl+aw.json ~/.mysqlsh/prompt.json

If you are looking for a font where all icons are correctly rendered, I recommend using Fantasque+powerline+awesome. You can of course modify the prompt file yourself. Mine is modified to use two lines and display the date and time. Here is my prompt file: prompt.json.zip

If you want to use SQL mode by default, you can also set it like this:

mysql-js> shell.options.setPersist('defaultMode', 'sql')

And finally, if you want to have credentials saved on Linux, you must have mysql_config_editor, which is distributed with the mysql-client package. Thank you for using MySQL Shell !
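To double-check what has already been persisted on a given machine, the current values can be read back from the same shell.options object. A small sketch in JS mode:

mysql-js> shell.options['history.autoSave']
mysql-js> shell.options['history.maxSize']
mysql-js> shell.options['defaultMode']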


Community

Migrate Ownership of your stored routines, views and triggers for MySQL in 2021

Yesterday my friend and ex-colleague Sveta published a blog post on how to migrate ownership of stored routines, views and triggers in MySQL. I usually agree a lot with her and even if I agree with what she wrote, there is one single point I would like to comment: use MySQL Shell ! In the blog post, Sveta recommends the use of mysqldump which I don’t. For any logical dump & load operations, I really recommend the use of MySQL Shell’s Dump & Load Utility ! It’s much faster, it has many nice options like hidden Primary Key creation, and is compatible with OCI. And of course, MySQL Shell also provides a solution for the problem exposed in the blog post. Let’s see MySQL Shell in action with the exact same example: First, using my own user (fred), I create a database and a view: mysql> create database definers; mysql> use definers; mysql> CREATE VIEW large_tables AS SELECT * FROM information_schema.tables WHERE DATA_LENGTH > 100000000; mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables'; +---------+--------------+--------------+ | DEFINER | TABLE_SCHEMA | TABLE_NAME | +---------+--------------+--------------+ | fred@% | definers | large_tables | +---------+--------------+--------------+ Now, I create the production user, for obvious security reasons, I assign a password to it: mysql> CREATE USER production identified by '********'; mysql> GRANT ALL ON definers.* TO production@'%'; mysql> GRANT SESSION_VARIABLES_ADMIN ON . TO production@'%'; We can now dump the definers schema using MySQL Shell where I will use fred to connect: [fred@imac ~] $ mysqlsh fred@localhost MySQL  localhost:33060+   2021-07-08 07:41:47  JS  util.dumpSchemas(['definers'], '/tmp/dump') Acquiring global read lock Global read lock acquired Gathering information - done All transactions have been started Locking instance for backup Global read lock has been released Writing global DDL files Running data dump using 4 threads. NOTE: Progress information uses estimated values and may not be accurate. Writing DDL for schema definers Writing DDL for view definers.large_tables ?% (0 rows / ?), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed Duration: 00:00:00s Schemas dumped: 1 Tables dumped: 0 Uncompressed data size: 0 bytes Compressed data size: 0 bytes Compression ratio: 0.0 Rows written: 0 Bytes written: 0 bytes Average uncompressed throughput: 0.00 B/s Average compressed throughput: 0.00 B/s And now I will load it (the view needs to be removed first if you load it on the same server) using the production user: [fred@imac ~] $ mysqlsh production@localhost MySQL  localhost:33060+   2021-07-08 07:42:53  JS  util.loadDump('/tmp/dump') Loading DDL and Data from '/tmp/dump' using 4 threads. Opening dump… Target is MySQL 8.0.25. 
Dump was produced from MySQL 8.0.25 Checking for pre-existing objects… Executing common preamble SQL Executing DDL script for schema definers [Worker002] Executing DDL script for definers.large_tables (placeholder for view) Executing DDL script for view definers.large_tables ERROR: Error executing DDL script for view definers.large_tables: MySQL Error 1227 (42000): Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation: /*!50001 CREATE ALGORITHM=UNDEFINED DEFINER=fred@% SQL SECURITY DEFINER VIEW large_tables AS select information_schema.tables.TABLE_CATALOG AS TABLE_CATALOG,information_schema.tables.TABLE_SCHEMA AS TABLE_SCHEMA,information_schema.tables.TABLE_NAME AS TABLE_NAME,information_schema.tables.TABLE_TYPE AS TABLE_TYPE,information_schema.tables.ENGINE AS ENGINE,information_schema.tables.VERSION AS VERSION,information_schema.tables.ROW_FORMAT AS ROW_FORMAT,information_schema.tables.TABLE_ROWS AS TABLE_ROWS,information_schema.tables.AVG_ROW_LENGTH AS AVG_ROW_LENGTH,information_schema.tables.DATA_LENGTH AS DATA_LENGTH,information_schema.tables.MAX_DATA_LENGTH AS MAX_DATA_LENGTH,information_schema.tables.INDEX_LENGTH AS INDEX_LENGTH,information_schema.tables.DATA_FREE AS DATA_FREE,information_schema.tables.AUTO_INCREMENT AS AUTO_INCREMENT,information_schema.tables.CREATE_TIME AS CREATE_TIME,information_schema.tables.UPDATE_TIME AS UPDATE_TIME,information_schema.tables.CHECK_TIME AS CHECK_TIME,information_schema.tables.TABLE_COLLATION AS TABLE_COLLATION,information_schema.tables.CHECKSUM AS CHECKSUM,information_schema.tables.CREATE_OPTIONS AS CREATE_OPTIONS,information_schema.tables.TABLE_COMMENT AS TABLE_COMMENT from information_schema.TABLES tables where (information_schema.tables.DATA_LENGTH > 100000000) */ Util.loadDump: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation (MYSQLSH 1227) As you can see, it failed the exact same way as it happened with mysqldump and this is because by default we don’t enable the strip_definers option. Now we will do the dump again enabling the option this time: JS  util.dumpSchemas(['definers'], '/tmp/dump', {compatibility:["strip_definers"]}) Acquiring global read lock Global read lock acquired Gathering information - done All transactions have been started Locking instance for backup Global read lock has been released Writing global DDL files Running data dump using 4 threads. NOTE: Progress information uses estimated values and may not be accurate. Writing DDL for schema definers Writing DDL for view definers.large_tables NOTE: View definers.large_tables had definer clause removed and SQL SECURITY characteristic set to INVOKER ?% (0 rows / ?), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed Duration: 00:00:00s Schemas dumped: 1 Tables dumped: 0 Uncompressed data size: 0 bytes Compressed data size: 0 bytes Compression ratio: 0.0 Rows written: 0 Bytes written: 0 bytes Average uncompressed throughput: 0.00 B/s Average compressed throughput: 0.00 B/s And we can see that the dump is now working as expected: JS  util.loadDump('/tmp/dump') Loading DDL and Data from '/tmp/dump' using 4 threads. Opening dump… Target is MySQL 8.0.25. Dump was produced from MySQL 8.0.25 Checking for pre-existing objects… Executing common preamble SQL Executing DDL script for schema definers [Worker002] Executing DDL script for definers.large_tables (placeholder for view) Executing DDL script for view definers.large_tables Executing common postamble SQL No data loaded. 
0 warnings were reported during the load.

And if we now check the definer of the loaded view:

mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';
+--------------+--------------+--------------+
| DEFINER      | TABLE_SCHEMA | TABLE_NAME   |
+--------------+--------------+--------------+
| production@% | definers     | large_tables |
+--------------+--------------+--------------+
1 row in set (0.00 sec)

As you can see, the MySQL Shell Dump & Load Utility is the way to go for any logical dump and load: faster and more powerful ! Thank you for using MySQL !
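And if the target of the load is a MySQL Database Service instance, the same dump can be made MDS-ready in one go; a minimal sketch, assuming the target is MDS (the extra options are the usual MDS compatibility flags):

JS > util.dumpSchemas(['definers'], '/tmp/dump', {ocimds: true, compatibility: ["strip_definers", "strip_restricted_grants"]})
JS > util.loadDump('/tmp/dump')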

MySQL Database Service

MySQL Database Service with HeatWave now available in 2 cloud regions in Brazil

MySQL Database Service with HeatWave is now available in Brazil Southeast (Vinhedo) and Brazil East (Sao Paulo) regions. With this addition, Oracle Cloud Infrastructure customers have more flexibility to deploy MySQL DB Systems closer to their applications, and it is now possible to select any of the 23 Oracle Cloud Infrastructure regions: North America  US East (Ashburn)  US West (Phoenix)  US West (San Jose)  Canada Southeast (Toronto)  Canada Southeast (Montreal) Latin America  Brazil East (Sao Paulo)  Brazil Southeast (Vinhedo)  Chile Central (Santiago) EMEA  UK South (London)  UK West (Newport)  Germany Central (Frankfurt)  Switzerland North (Zurich)  Netherlands Northwest (Amsterdam)  Saudi Arabia West (Jeddah)  UAE East (Dubai) APAC  Japan East (Tokyo)  Japan Central (Osaka)  South Korea Central (Seoul)  South Korea North (Chuncheon)  Australia East (Sydney)  Australia Southeast (Melbourne)  India West (Mumbai)  India South (Hyderabad)     You are welcome to try MySQL Database Service and many other services for free. Go to oracle.com/cloud/free and create your free account in Oracle Cloud Infrastructure. Check the documentation for the quick steps to create your MySQL databases.  Visit oracle.com/mysql to learn more about MySQL Database Service and HeatWave.

Community

MDS, PHP and authentication

Recently I blogged about how to easily deploy a LAMP application to MDS. Using the Terraform module (OCI Resource Manager's Stack) you also have the possibility to choose the PHP version to install: But as you should already know, not all PHP versions support the latest default authentication method in MySQL 8.0: caching_sha2_password [1], [2], [3]. Most PHP applications use mysqli with mysqlnd as the connector. Let's have a look at a summary of which combinations are available and compatible with MySQL Database Service: We can see that the default settings of MDS are compatible with PHP 7.4 and 8.x. But in case you are using an older version of PHP (not older than 7.1), your application will be perfectly able to use MDS, but the MySQL user used by the application will have to use the mysql_native_password authentication plugin. It's almost the same if you are using PDO with mysqlnd. If you are using mysql_xdevapi as the connector, it works with all versions from 7.1. This is an example of how to create a user with this previous authentication plugin:

SQL > create user test_php identified with 'mysql_native_password' by 'TestPassw0rd!';

If you don't do it, this is the error you will get in your PHP application using PHP <7.4 with MySQL 8.0 and MDS using the default authentication settings:

The server requested authentication method unknown to the client [caching_sha2_password] in test.php on line 2
mysqli::__construct(): (HY000/2054): The server requested authentication method unknown to the client in test.php on line 2

In MDS, you can also change this default authentication method to the previous native plugin. To achieve that, you will have to create a new MDS configuration and set the default_authentication_plugin user variable to mysql_native_password as shown below: And of course use that new configuration with your MDS instance. Happy PHP development with MySQL !
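To double check which plugin an existing account uses, or to switch it afterwards, you can do something like this; a minimal sketch reusing the hypothetical test_php account, and assuming your MDS admin user is allowed to read mysql.user:

SQL > SELECT user, host, plugin FROM mysql.user WHERE user='test_php';
SQL > ALTER USER test_php IDENTIFIED WITH 'mysql_native_password' BY 'TestPassw0rd!';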

Announcements

Where’s the MySQL team from July - September 2021

As a continuation of the previous announcement, please find below the list of shows where you can find the MySQL Community and/or the MySQL team during July - September 2021:

July 2021:
MySQL meetup / Virtual, July 7, 2021 - Carsten Thalheimer and Abdullah Zarour, the MySQL Principal Solution Engineers, will be talking about Oracle Cloud - MySQL Database Service & HeatWave (for Real-time Analytics). Scheduled for 5:00 PM to 7:00 PM GMT+4. Please make sure you register for Zoom via the URL above.
Hong Kong Open Source Conference / Virtual, July 17, 2021 - Do not miss the following MySQL talks already accepted for OS HK: Ryusuke Kajiyama, the Open Source Solution Specialist, on "Is your MySQL the Genuine MySQL", scheduled for 14:35-15:05; Ivan Ma, the MySQL Master Principal Solution Engineer, on "Sneak Preview on MySQL Operator for Kubernetes", scheduled for 15:20-15:35; Ivan Tu, the MySQL Solution Engineering Manager, on "Enhance Grafana performance with MySQL Database Service and Heatwave", scheduled for 15:55-16:05.
COSCUP / Virtual, July 31-August 1, 2021 - At COSCUP this year we again have a MySQL track with the following accepted talks: "Closer look at MySQL Replication" – Ivan Ma, the MySQL Master Principal Solution Engineer; "MySQL Window Function, a practice guide" – John Liu; "MySQL HA Solutions facilizing 7X24 available application" – Andy Hsu; "MySQL Ultimate Fortification (Fully Automated MHA Mechanism)" – PINK; "InnoDB Cluster on Kubernetes with new MySQL Operator" – Hanato Wicaksono, MySQL Principal Solution Engineer; "MySQL 8.0 for Developer" – Ivan Tu, the MySQL Solution Engineering Manager; "About MySQL HA" – Andy Hsu. The full schedule is not yet finalized; please watch the organizers' website for the exact timing.

August 2021:
FrOSCon / Virtual, August 21-23, 2021 - Carsten Thalheimer, the MySQL Principal Solution Engineer, will talk on "MySQL Shell: A DevOps-engineer day with MySQL's development". The talk is scheduled for August 23 @13:00. David Stokes, the MySQL Community Manager, will most probably have a speaking slot on "Covid Cording or Rediscovering the Guitar When Stuck at Home". Timing is still TBC.
Devoxx PL, Krakow, Poland / Physical event, August 25-27, 2021 - Vittorio Cioe, the MySQL Solution Engineer & Sales Consulting, will be talking on "Build your future proof solutions with MySQL Database Service and Heatwave." The timing of the speaking slot is not yet set; please watch the organizer's website for further updates. MySQL is also going to have a physical booth and we hope to be able to staff it in person. If for any reason this is not possible, we will switch to running everything virtually.
Open Developer Conference / Virtual, August 28, 2021 - Yoshiaki Yamasaki, the MySQL Principal Solution Engineer, will have a 45-minute talk. Please watch the organizers' website for further updates.

September 2021:
Kansas City Developers Conference - KCDC / Physical event, September 15-17, 2021 - We are still in the process of being part of this show. Our submitted talk needs to be accepted first.
DataCon LA / Virtual, September 17-19, 2021 - We are still in the process. Our talk needs to be accepted first.

May 20 - Live Webinar - What's New in MySQL up to 8.0.24 and 8.0.25

Join us on May 20, 2021 at 9AM PST for a live webinar focused on the new MySQL 8.0 features. The webinar will cover the key new features introduced in MySQL 8.0 up to the recently released 8.0.24 and 8.0.25. The latest versions introduce several new improvements and updates for Optimizer, GIS, InnoDB, Performance schema, Replication, Group Replication, Router, MTR Testsuite, and more. The webinar will highlight the key benefits of each and best practices on how to use them. Register now Date: May 20, 2021 @9AM PST Speaker: Harsh Nayak    Additional Resources: Top MySQL 8.0.24 blogs: Overview: The MySQL 8.0.24 Maintenance Release is Generally Available MySQL Shell 8.0.24 Dump now supports array arguments in non-interactive mode MySQL Shell 8.0.24 – What’s New? MySQL Cluster 8.0.24, 7.6.18, 7.5.22, 7.4.32, and 7.3.33 MySQL Workbench 8.0.24  MySQL Shell 8.0.24 for MySQL Server 8.0 and 5.7  Announcing MySQL Server 8.0.24 and 5.7.34 MySQL 8.0.24: thank you for the contributions Connectors SSH Tunneling in Connector/Net MySQL Connector/J 8.0.24  MySQL Connector/C++ 8.0.24  MySQL Connector/NET 8.0.24  MySQL Connector/Node.js 8.0.24  MySQL Connector/ODBC 8.0.24   MySQL Connector/Python 8.0.24    Top MySQL 8.0.25 blogs: Announcing MySQL Cluster 8.0.25 Announcing MySQL Server 8.0.25 MySQL Shell 8.0.25 for MySQL Server 8.0 and 5.7 MySQL Workbench 8.0.25  Connectors: MySQL Connector/NET 8.0.25  MySQL Connector/J 8.0.25 MySQL Connector/Python 8.0.25  MySQL Connector/ODBC 8.0.25 MySQL Connector/C++ 8.0.25  MySQL Connector/Node.js 8.0.25      

MySQL Database Service

New Data Import feature in MySQL Database Service

The MySQL Engineering team is proud to announce the Data Import feature in MySQL Database Service (MDS) and HeatWave. The new Data Import enables faster and easier migrations to the MySQL Database Service. You can migrate from virtually any MySQL source, like AWS RDS/Aurora or MySQL on-premise, and from different versions, including 8.0, 5.7, or 5.6, directly to MySQL Database Service. To migrate your existing MySQL databases, you perform two easy steps, in sequence or in parallel: Use MySQL Shell to export data from an existing MySQL source into an OCI Object Storage bucket; In MDS, create a new MySQL DB System referencing the bucket using a Pre-Authenticated Request (PAR). The data is imported automatically! Together with MySQL Shell, MDS Data Import functionality dramatically reduces the time and complexity of migrations. MySQL Shell for the data dump The dump process can be highly parallelized using multiple threads. Only a single command is required to dump data into OCI Object Storage. Compatibility transformations are available if necessary.  MDS built-in Data Import Automatically sets MySQL configurations for high-performance data ingestion. Directly accessible in the web Console while creating a new DB System. Generates the Object Storage PAR with a single click if necessary. Can start to load data into MDS even if the dump is still in progress. Ensures the data load is complete and correct before you start using the DB System. The best part of all is that you don’t need to think about it. MySQL Database Service handles the complexity for you. To get started, we recommend importing an existing MySQL database into the MySQL Database Service. We also recommend trying HeatWave for 400x query acceleration for MySQL. You can go to oracle.com/cloud/free and create your free account in Oracle Cloud Infrastructure. Check the documentation for quick steps to create your first MySQL DB System with HeatWave and importing and exporting data.
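For the first step, the dump to Object Storage is a single MySQL Shell call; a minimal sketch, assuming a bucket and namespace of your own (the names below are placeholders) and an OCI CLI configuration already in place:

JS > util.dumpInstance('mds-import', {
       osBucketName: 'my-dump-bucket',      // placeholder bucket name
       osNamespace: 'mytenancynamespace',   // placeholder Object Storage namespace
       ocimds: true,                        // check and prepare the dump for MDS
       compatibility: ['strip_definers', 'strip_restricted_grants'],
       threads: 8                           // parallel dump threads
     })

The new DB System is then created in the console pointing at that bucket (or at a PAR generated for it), which triggers the automatic import described above.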

The BBC ensures world-class broadcasting services with MySQL Enterprise Edition

The BBC delivers resilience of broadcasting services across TV, radio, and digital platforms using MySQL InnoDB Cluster. Kim Cassells, Senior Database Systems Engineer, BBC said, “MySQL InnoDB Cluster provides us with the high availability, fault tolerance, and scalability that are critical for delivering modern broadcasting services. We also get advice directly from the MySQL engineers and can monitor our applications proactively.”   She also concluded,“Our success in deploying MySQL InnoDB Cluster at the BBC is extremely encouraging. We can offer this high availability solution now to other broadcast-critical operations within the BBC, such as the automation of the program-making galleries and studios in the BBC Broadcasting House in London.  It's an exciting time ahead for us.”   Watch the full BBC session at our latest MySQL Day - UK Video   Business Challenges & Goals The BBC is the world’s leading public service news organisation. BBC Wales ensures news and content production in English and Welsh for 3 TV stations, 25 radio stations, and numerous online channels. It is a pivotal site for the BBC. The BBC needed to modernize its media production environment and further enhance production quality, high availability and on-demand scalability of its critical broadcasting applications. Those applications are dedicated to automating studio production systems, audio and video exports, and optimizing graphic engine routes.   Business Results & Metrics   Multiple MySQL InnoDB Cluster nodes were deployed in separate fault zones in the Cardiff datacentre. This provided enhanced efficiency, resiliency, and scalability to BBC Wales’ broadcasting services which are consumed round-the-clock by viewers in the United Kingdom and worldwide on television, radio, and digital platforms.  MySQL’s InnoDB Cluster technology gave the BBC a native, perfectly integrated, high availability solution for managing critical broadcasting services without having to rely on any external component or application failover logic. Failovers and data replication to all nodes are automatic. “The MySQL InnoDB Cluster configures itself automatically to determine which MySQL instance will act as the primary database node. Nobody notices anything: there is no service outage and no need for manual intervention,” Kim Cassells said. The BBC’s DBAs have gained full visibility of potential issues in each node and the overall cluster with the help of MySQL Enterprise Monitor. The tool provides in-depth advice and alerts the DBAs on problems before these become critical. It also makes maintenance easier, enabling the DBA team to focus on core tasks such as process documentation instead of managing the clusters.  The BBC now has direct 24x7 access to experienced MySQL engineers and technical support to assist with the deployment, administration, and monitoring of their MySQL databases and applications, enabling the DBA team to adopt a more proactive approach for managing databases and applications.  The BBC now has the flexibility to scale MySQL on-demand, by adding more cluster nodes, thus ensuring sustainable performance of the organisation’s broadcasting services, in line with the requirements of increasing data loads. This allows producers to constantly expand on the content they create.   
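For readers who want to try the same building block, an InnoDB Cluster like the one the BBC describes is typically assembled with the MySQL Shell AdminAPI; a minimal sketch with placeholder host, user and cluster names:

JS > dba.configureInstance('clusteradmin@node1:3306')   // repeat for node2 and node3
JS > \c clusteradmin@node1:3306
JS > var cluster = dba.createCluster('prodCluster')     // 'prodCluster' is a placeholder name
JS > cluster.addInstance('clusteradmin@node2:3306')
JS > cluster.addInstance('clusteradmin@node3:3306')
JS > cluster.status()                                   // shows which member is the PRIMARY

Applications then connect through MySQL Router, which is bootstrapped against the cluster and transparently follows the primary, giving the single-endpoint, no-manual-failover behaviour described above.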

Webinar: MySQL Enterprise Edition Apr 29

Join us on Thursday April 29th, 2021 @10:00 CET (EMEA) for a live webinar that will cover the new features of MySQL Enterprise Edition.

Who's the event for?
Your applications manage personal data: names, account numbers, passwords
You belong to an industry or region that is subject to data protection requirements
You use 3rd party components for authentication, auditing, & key management
You're an ISV or SaaS organization that manages sensitive customer data
You're an existing MySQL Community Edition user

Learn how MySQL Enterprise Edition will save you time and give you peace of mind by increasing the protection of your critical data, preventing data breaches, and helping you meet regulatory requirements. MySQL provides advanced security features to protect the privacy of information, prevent data breaches, and help organizations meet regulatory requirements such as GDPR, PCI, HIPAA and numerous others. See how by registering for the webinar today: Register now.

Among the MySQL Enterprise Edition features covered are:

MySQL Enterprise Transparent Data Encryption (TDE)
MySQL Enterprise Transparent Data Encryption (TDE) enables data-at-rest encryption by encrypting the physical files of the database. Data is encrypted automatically, in real time, prior to writing to storage and decrypted when read from storage.

New! MySQL Enterprise Masking and De-identification
MySQL Enterprise Masking and De-identification provides an easy to use, built-in database solution to help organizations protect sensitive data from unauthorized uses by hiding and replacing real values with substitutes.

MySQL Enterprise Backup
MySQL Enterprise Backup reduces the risk of data loss by delivering online "Hot" backups of your databases. It supports full, incremental and partial backups, Point-in-Time Recovery and backup compression.

MySQL Enterprise High Availability
MySQL InnoDB Cluster delivers an integrated, native, HA solution for your databases. It tightly integrates MySQL Server with Group Replication, MySQL Router, and MySQL Shell, so you don't have to rely on external tools, scripts or other components.

MySQL Enterprise Scalability
MySQL Enterprise Scalability enables you to meet the sustained performance and scalability requirements of ever increasing user, query and data loads. MySQL Thread Pool provides an efficient, thread-handling model designed to reduce overhead in managing client connections and statement execution threads.

MySQL Enterprise Authentication
MySQL Enterprise Authentication provides ready to use external authentication modules to easily integrate MySQL with existing security infrastructures, including PAM and Windows Active Directory. MySQL users can be authenticated using Pluggable Authentication Modules ("PAM") or native Windows OS services.

MySQL Enterprise Encryption
MySQL Enterprise Encryption provides encryption, key generation, digital signatures and other cryptographic features to help organizations protect confidential data and comply with regulatory requirements.

MySQL Enterprise Firewall
MySQL Enterprise Firewall guards against cyber security threats by providing real-time protection against database-specific attacks, such as SQL injection. MySQL Enterprise Firewall monitors for database threats, automatically creates an allowlist of approved SQL statements and blocks unauthorized database activity.

MySQL Enterprise Audit
MySQL Enterprise Audit enables you to quickly and seamlessly add policy-based auditing compliance to new and existing applications.
You can dynamically enable user level activity logging, implement activity-based policies, manage audit log files and integrate MySQL auditing with Oracle and third-party solutions. MySQL Enterprise Monitor The MySQL Enterprise Monitor and the MySQL Query Analyzer continuously monitor your databases and alerts you to potential problems before they impact your system. It's like having a "Virtual DBA Assistant" at your side to recommend best practices to eliminate security vulnerabilities, improve replication, optimize performance and more. As a result, the productivity of your developers, DBAs and System Administrators is improved significantly. Oracle Enterprise Manager for MySQL The Oracle Enterprise Manager for MySQL provides real-time monitoring and delivers comprehensive performance, availability and configuration information for your MySQL databases. MySQL Router MySQL Router is lightweight middleware that provides transparent routing between your application and any backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers. MySQL Workbench MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. It provides data modeling, SQL development, database migration and comprehensive administration tools for server configuration, user administration, and much more. MySQL Technical Support MySQL Technical Support Services provide 24x7 direct access to our expert MySQL Support engineers who are ready to assist you in the development, deployment, and management of MySQL applications.   Learn more about MySQL Enterprise Edition Download the MySQL Enterprise Edition Whitepaper
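As a small, concrete illustration of the TDE capability listed above: once a keyring component is configured, encrypting data at rest is a per-table (or per-tablespace) clause; a minimal sketch with hypothetical table names:

mysql> CREATE TABLE customers (id INT PRIMARY KEY, card_no VARCHAR(32)) ENCRYPTION='Y';
mysql> ALTER TABLE legacy_orders ENCRYPTION='Y';   -- encrypt an existing file-per-table tablespace
mysql> SELECT NAME, ENCRYPTION FROM INFORMATION_SCHEMA.INNODB_TABLESPACES WHERE ENCRYPTION='Y';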

MySQL Database Service

MySQL Database Service with High Availability

MySQL Database Service is now easier to use and has flexible deployment options, including High Availability. While creating a new DB System using the web console, users can choose between Standalone, High Availability, and HeatWave deployments. The service will pre-populate the necessary fields with the recommended settings. You only need to provide the MySQL user name and password to get MySQL up and running.

With the Standalone option, users get a single-instance MySQL DB System backed by the resilient and secure OCI Block Volumes. This option is the most common for test and development environments.

The High Availability option enables applications to meet higher uptime requirements and zero data loss tolerance. When you select the High Availability option, a MySQL DB System with three instances is provisioned across different availability or fault domains. The data is replicated among the instances using a Paxos-based consensus protocol implemented by the MySQL Group Replication technology. Your application connects to a single endpoint to read and write data to the database. In case of failure, MySQL Database Service will automatically fail over within minutes to a secondary instance, without data loss and without requiring any application reconfiguration. See the documentation to learn more about MySQL Database Service High Availability.

The HeatWave option deploys a pre-configured DB System that allows accelerated query processing, suitable for running both OLTP and OLAP workloads. With its innovative in-memory analytics engine, HeatWave provides 400x query acceleration for MySQL at a very compelling price. If you have an existing MySQL application experiencing slow queries, you can also request a Performance Health Check. A MySQL Technician will work closely with your team to run your queries using MySQL Database Service with HeatWave to verify the performance improvements.

You can try MySQL Database Service and many other services for free. Go to oracle.com/cloud/free and create your free account in Oracle Cloud Infrastructure. Check the documentation for the quick steps to create your MySQL databases.
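Once a High Availability DB System is up, you can see the three members and which one is currently the primary directly from SQL; a minimal sketch, assuming your MDS user is allowed to read the performance_schema replication tables:

mysql> SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
    -> FROM performance_schema.replication_group_members;
-- A healthy HA DB System should report three ONLINE members: one PRIMARY and two SECONDARY.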

Announcements

Where’s the MySQL team from April - June 2021 (updated, May 20, 2021)

Please find below the list of shows where you can find MySQL Community and/or the MySQL team at during April to June 2021: April 2021: Data Love Conference/VIRTUAL, April 16, 2021 David Stokes, the MySQL Community Manager is going to talk about "Windowing Functions for Database Analytics", for the exact timing please check organizers' website. MidwestPHP/VIRTUAL, April 22-23, 2021 David Stokes, the MySQL Community Manager is going to talk about "MySQL 8.0 New Features". The talk is scheduled for 1:50pm-2:30pm on April 22nd.  May 2021: PerconaLive/VIRTUAL, May 12-13, 2021 This year MySQL is going to have MySQL track(s) with multiple talks given by MySQL development, Community & technical team members. For the schedule & talk details, (currently under review) please visit Percona's website. DOAG Databanken/VIRTUAL, May 17-18, 2021 Carsten Thalheimer, the Master Principal Sales Consultant is going to have a MySQL talk on "Vortrag: Dinge, die ich hätte wissen sollen, bevor ich MySQL installiert habe…", the talk is scheduled for May 18, 2021 @11:00-11:45 am, see in the DOAG schedule. Conf42: Python 2021/VIRTUAL, May 27, 2021 David Stokes, the MySQL Community Manager will be talking on "Python and the New MySQL Shell".  Open Source Conference (OSC) Nagoya/VIRTUAL, May 29, 2021 Yoshiaki Yamasaki, the MySQL Principal Solution Engineer, Japan & Asia Pacific & local team are going to run MySQL session. Please watch organizers’ website for further details. June 2021: OpenJS World 2021/VIRTUAL, June 2, 2021 David Stokes, the MySQL Community Manager will be speaking on Javascript Mode and the MySQL Shell topics.  DutchPHP/VIRTUAL, June 17, 2021 David Stokes, the MySQL Community Manager will run 3 hours workshop on "MySQL 8.0 Indexes, Histograms, and Other Ways to Speed Up Your Queries". The workshop is planned for June 17@14:00-17:00. OpenExpo Europe/VIRTUAL, June 17-21, 2021 As a partner of OpenExpo, MySQL team will run 2 presentations, one as part of OpenExpo conference and second before June 2021. We are working on the content and timing. Please watch organizers' website for further updates. NEW OpenSource Summit, June 22-25, 2021 David Stokes, the MySQL Community Manager will be talking on "JSON Within A Relational Database". The talk is scheduled for June 23 @11:00-11:55am. See full agenda here. Open Source Conference (OSC) Hokkaido/VIRTUAL, June 26, 2021 Same as for OSC Nagoya, Yoshiaki Yamasaki, the MySQL Principal Solution Engineer, Japan & Asia Pacific & local team are going to run MySQL talk. Please watch organizers’ website for further details. Conferences where our involvement still needs to be confirmed: SouthEast Linux Fest (SELF), US, Date: June 11-13, 2021 Hong Kong OS Conference, Date: June 2021 RootConf, India, Date: June 2021

Analyst Report: Oracle Cranks up the Heat in the MySQL Cloud Market

Bringing High Performance Analytics to MySQL                                                                                                                                                                                                                              Author: Tony Baer, Principal, dblnsight                                                              Main takeaway:  “The MySQL landscape needed a shakeup. Until now, it was considered the default go-to open source database for online transaction processing (OLTP) for the scenarios not requiring the more sophisticated capabilities of “enterprise” databases. And, owing to lack of features such as materialized views, MySQL has not been seriously considered for analytics/data warehousing use cases. However, given the popularity of MySQL with its large skills base, there was a vacuum waiting to be filled. Oracle has been actively developing MySQL for over a decade, but in the MySQL cloud ecosystem, Oracle was overshadowed by cloud providers offering their own MySQL DBaaS services. To enter a MySQL DBaaS market that was already pretty crowded, Oracle would have to sharply differentiate itself. Oracle’s new “HeatWave” engine does just that, turning up the heat on rival cloud MySQL services by not only adding analytics to the mix, but incorporating hardware and software optimizations that significantly accelerate performance. With the new service, MySQL customers can use the same familiar database for their OLTP and OLAP workloads, while eliminating the cost, complexity, and risk of the ETL processes that are required when using separate databases. Aggressively priced, Oracle MySQL Database Service customers should get a good bang for their buck.”     Additional highlights include: “Oracle MySQL Database Service, with its integrated HeatWave engine, is the only MySQL cloud service that supports OLTP and OLAP workloads.” “The combination of conventional in-memory columnar, micro partitioning, and vector processing makes HeatWave unique.” “Oracle has run its own TPC-H benchmark comparing MySQL Database Service with HeatWave vs. Amazon Redshift. This is not an opaque benchmark; Oracle encourages customers to run the same benchmark for themselves, making all the scripts publicly available on GitHub.” “Oracle is not simply adding just another me-too MySQL cloud service to an already-crowded market. It is differentiating it, both in features and price.” “The big benefit is with transaction and analytic workloads relying on the same data.Because data stays within the same database, it dispenses with the need for ETL processes.” “HeatWave is a versatile engine for analytics, enabling in-memory vectorized processing, massive inter-and intra-node parallelism, and distributed query processing; the engine chooses the optimal path for the query.” “Aggressively priced, Oracle MySQL Database Service customers should get a good bang for their buck.” Read the full report   Additional references: Learn more about HeatWave (videos, testimonials, related blogs, Larry’s quote) Learn more about MySQL Database Service Try it free     

MySQL HeatWave: Your Reference Guide and Webinar

MySQL Database Service with HeatWave: 1100x Faster than Amazon Aurora, 400x faster than Amazon RDS, 18x faster than Amazon Redshift at ⅓ the cost. This blog post will guide you through the MySQL HeatWave references that are available today, from a live webinar to recorded videos, blogs, and more. Let's start with a quick definition of what MySQL HeatWave is. HeatWave is a new, in-memory query accelerator for MySQL Database Service available only in the Oracle Cloud. It can accelerate performance on large multi-TB datasets and scale across thousands of cores. Our benchmark results and customer feedback show that HeatWave accelerates MySQL queries by 400x and is 1/3 the cost of Amazon Redshift. Oracle MySQL Database Service, with HeatWave, is the only service that enables database admins and app developers to run OLTP and OLAP workloads directly from their MySQL database, eliminating the need for complex, time-consuming, and expensive data movement and integration with a separate analytics database.

Learn more in this on-demand webinar:
Speaker: Nipun Agarwal, Vice President, Research & Advanced Development
Title: 400x Query Acceleration for MySQL with HeatWave
Description: Learn how you can leverage HeatWave to accelerate your MySQL queries. Watch it now

Query Performance Health Check - The HeatWave Challenge
Do you want to improve the performance of your slow queries? Sign up today for a HeatWave Performance Health Check. As part of the Performance Health Check, a MySQL Technician will work closely with your team to run your queries using MySQL Database Service with HeatWave and share the performance improvements. Sign up

Videos:
A 5 minute demo of the MySQL Database Service with HeatWave analytics engine.
A 30 minute video from the launch of MySQL Database Service with HeatWave analytics engine.

Recent blogs about HeatWave:
Breakthrough Enhancements in MySQL Database Service with Analytics Engine
HeatWave for MySQL – Technical Deep Dive
MySQL HeatWave: 1100x Faster than Aurora, 400x than RDS, 18x than Redshift at ⅓ the cost
Introducing MySQL Database Service and MySQL Analytics Engine

Other resources:
Learn more about HeatWave on our website
Read the technical brief
See the performance comparison and benchmarks
Peruse the documentation
Ask your questions in our community forum
Get MySQL Database Service Training and Certification

As a dolphin, Sakila is a fast swimmer, but when on its HeatWave surfboard, it is the fastest! There is no match when it comes to surfin' data waves. Try it free

Customer quotes:

MySQL HeatWave: 1100x Faster than Aurora, 400x than RDS, 18x than Redshift at 1/3 the cost

HeatWave is designed to enable customers to run analytics on data which is stored in MySQL databases without the need for ETL. This service is built on an innovative, in-memory analytics engine which is architected for scalability and performance and is optimized for Oracle Cloud Infrastructure (OCI) Gen 2 hardware. This results in a very performant solution for SQL analytics at a fraction of the cost compared to other cloud services, including AWS Aurora, Redshift, Google Big Query, and RDS. The amount of acceleration an application would observe with HeatWave depends upon a number of factors like the data size, the queries, the operators being used in the query, and the selectivity of the predicates. For the purpose of comparison, we are considering the TPC-H benchmark, which has well-defined queries, so the only variables are the data size and the system configuration. HeatWave is able to handle all workloads with a single shape, which significantly simplifies the choice for the customer.

400x Query Acceleration for MySQL
The first comparison we make is with the MySQL database, which is representative of MySQL running on various cloud platforms or various flavors of MySQL. For a 400GB data size, using the same number of cores and the same amount of DRAM for MySQL, HeatWave accelerates performance by 400x for analytics workloads like TPC-H. Furthermore, there is no need to create any indexes with HeatWave.
Figure 1. HeatWave accelerates MySQL queries by 400x

1100x Faster than Aurora, 3x Cheaper
The next comparison we show is with Amazon Aurora, which is Amazon's premium database service. HeatWave offers a dramatic improvement in performance for complex and analytic queries. For a 4TB TPC-H workload, MySQL HeatWave is 1100x faster than Amazon Aurora. Furthermore, there is no need to create indexes on the base tables, which takes over 5 days with Amazon Aurora, compared to under 4 hours to load the data into HeatWave. As a result, the data is available to query much sooner than with Aurora. Furthermore, the cost is less than 1/3 of Aurora.
Figure 2. HeatWave is 1100x faster and less than 1/3 the cost of Aurora
The performance improvement of MySQL Database Service with HeatWave over Aurora increases with the size of the data.
Figure 3. The performance advantage of HeatWave increases with data size vs. Amazon Aurora

17x Faster than Redshift, 3x Cheaper
Next, we compare with Amazon Redshift, which is designed for analytics and is offered in multiple shapes. Compared to the fastest shape (dc2.8xlarge), HeatWave is up to 3x faster and 1/3 the cost. For HeatWave, the cost includes both OLTP and OLAP capabilities, while for Redshift the additional cost of the OLTP system and the cost of ETL from the OLTP database to Redshift is not included.
Figure 4. HeatWave is 2.7x faster and 1/3 the cost of Amazon Redshift's fastest shape
Compared to the cheaper shape of Redshift (RA3.4xLarge), HeatWave is up to 18x faster and 3% less expensive. Unlike Redshift, HeatWave is capable of running both OLTP and OLAP workloads, without the need for ETL. With Redshift, the listed cost is only for OLAP, and additional costs are needed for the OLTP database.
Figure 5. HeatWave is 17.7x faster and cheaper than Amazon Redshift's cheaper shape

Customers who use HeatWave will benefit from significantly better performance, elimination of the need for ETL, support for real-time analytics, reduced monthly cost, and a single database for OLTP and OLAP.
Conclusion
HeatWave is a cloud native service which is exclusively available in Oracle Cloud Infrastructure and provides compelling performance and cost for analytic workloads. Organizations using the MySQL database for managing their enterprise data can now run analytic queries with HeatWave with significantly better performance, lower cost, no need for ETL, and support for real-time analytics, in contrast to other database services like RDS, Google Big Query, Snowflake, Aurora and Redshift. The service can be deployed in a cloud-only or in a hybrid environment, and it simplifies management for both transactional and analytic applications. We welcome you to try this service for free: https://www.oracle.com/cloud/free/ Learn more about HeatWave: https://www.oracle.com/mysql/heatwave/
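To give an idea of the workflow behind these numbers, pointing a table at HeatWave and letting the optimizer offload queries looks roughly like this; a minimal sketch with a hypothetical TPC-H style orders table already loaded in InnoDB:

mysql> ALTER TABLE orders SECONDARY_ENGINE = RAPID;   -- mark the table for HeatWave
mysql> ALTER TABLE orders SECONDARY_LOAD;             -- copy it into the HeatWave cluster memory
mysql> SET SESSION use_secondary_engine = ON;         -- let the optimizer offload eligible queries
mysql> SELECT o_orderpriority, COUNT(*) FROM orders GROUP BY o_orderpriority;

Note that no index creation is needed on the HeatWave side, which is exactly the point made in the comparison above.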

Integrating Any Type of Data with MySQL Database Service and HeatWave

Oracle Cloud Infrastructure Data Integration with MySQL Database Service and HeatWave
Virtually all data types from any data source can be integrated into MySQL Database Service and HeatWave. When you need to collect data from diverse sources in a timely manner, Oracle Cloud Infrastructure (OCI) Data Integration, a cloud native, serverless ETL (extract, transform, load) service on the Oracle Cloud, provides a unified solution for building, deploying, and managing complex data warehouses. Discover how its combined elements of data integration, with a graphical, no-code designer and interactive data preparation, all powered by Spark ETL or E-LT push-down execution, ensure that information is timely, accurate, and consistent across complex systems.

Watch the full webinar (including demo):
Speakers: Nicolas de Rico, MySQL Principal Solution Engineer; Julien Testut, Senior Principal Product Manager, Oracle Cloud Infrastructure (OCI) Data Integration

Presentation slides: Download the slide presentation

Additional references:
Oracle Cloud Infrastructure Data Integration - Documentation
Oracle Cloud Infrastructure Data Integration - Connecting to Data Sources
MySQL Database Service and HeatWave - Get Started
MySQL Database Service and HeatWave - Free Trial
MySQL Database Service and HeatWave - Tutorial

MySQL 8.0.23 Top Blogs and Webinar

MySQL 8.0.23 introduces many new improvements and updates, many of which deserve their own blog post for a deep dive into the new features.  Among the notable changes are: Invisible columns, Query attributes, several new (security, InnoDB, GIS, Replication, Group replication, X Protocol, Router, MTR Testsuite) features, and many more. Recorded Webinar: Our webinar covers the many new features introduced with the latest 8.0.23 release, plus the highlights of MySQL 8.0    Top blogs: The MySQL 8.0.23 Maintenance Release is Generally Available A 250x improvement to tablespace truncation in MySQL Server 8.0.23 Announcing MySQL Cluster 8.0.23, 7.6.17, 7.5.21, 7.4.31, and 7.3.32 MySQL Shell 8.0.23 for MySQL Server 8.0 and 5.7 has been released MySQL Workbench 8.0.23 has been released New MySQL NDB Cluster version 8.0.23 Announcing MySQL Server 8.0.23 and 5.7.33 Group profiles in MySQL Enterprise Firewall MySQL 8.0.23: thank you for the contributions MySQL Invisible Column – part I MySQL Invisible Column: part II MySQL Invisible Column: part III MySQL Shell 8.0.23 – What’s New? MySQL Shell AdminAPI – What’s new in 8.0.23?   MySQL Connector/J 8.0.23 has been released MySQL Connector/C++ 8.0.23 has been released MySQL Connector/NET 8.0.23 has been released MySQL Connector/Node.js 8.0.23 has been released MySQL Connector/Python 8.0.23 has been released MySQL Connector/ODBC 8.0.23 has been released Support for Date-Time Types in Connector/J 8.0      
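Of the features listed above, invisible columns are probably the quickest to try; a minimal sketch with a hypothetical table, showing that the column is only returned when referenced explicitly:

mysql> CREATE TABLE t1 (id INT PRIMARY KEY, payload VARCHAR(100), created_at DATETIME DEFAULT CURRENT_TIMESTAMP INVISIBLE);
mysql> INSERT INTO t1 (id, payload) VALUES (1, 'hello');
mysql> SELECT * FROM t1;              -- created_at is not part of the result
mysql> SELECT id, created_at FROM t1; -- explicitly selecting it still works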

Announcements

Where's the MySQL Team from January - March 2021

As a continuation of the regular blog posts informing you about where you can find the MySQL Community & MySQL team, please find below a list of shows you can join to listen to MySQL talks & sessions.

January 2021:
Cloud Native Meetup / Virtual, January 26, 2021 - URL to be provided later on. Frederic Descamps, the MySQL User Group Manager, will talk on "MySQL 8.0: Why to use MySQL 8.0 and how to deploy a web application in OCI using MySQL Database Service". Scheduled for January 26th as an EMEA & JAPAC session, and for January 27th, 2021.
OpenSource Conference Osaka / Virtual, January 30, 2021 - Yoshiaki Yamasaki, the MySQL Principal Solution Engineer, Japan & Asia Pacific, will be talking about "Introduction to HeatWave & MySQL Database Services". See the session on the website.

February 2021:
FOSDEM / Virtual, February 6-7, 2021 - As usual, a MySQL room is open at FOSDEM. Both David Stokes & Frederic Descamps, the MySQL User Group Managers, will speak at FOSDEM this year, and many more speakers are lined up. You can see the full agenda of the MySQL Dev room here.
Rocky Mountain Oracle Users Group / Virtual, February 8-11, 2021 - David Stokes, the MySQL User Group Manager, will be talking about "MySQL 8.0 New Features". His talk is scheduled for Feb 8th @ 3:15-4:10 pm.
ConFoo / Virtual, February 22-26, 2021 - David Stokes, the MySQL User Group Manager, has two accepted talks: Feb 24 @3:15pm: "MySQL 8.0 Update" & Feb 26 @3:00pm: "MySQL Histograms versus Indexes".

March 2021:
OpenSource Conference Spring / Virtual, Mar 5-6, 2021 - Machiko Ikoma, the MySQL Principal Solution Engineer, will talk about "MySQL Update". See the session on the website.
OpenSource 101 / Virtual, March 30, 2021 - David Stokes, the MySQL User Group Manager, is going to have a talk at OS 101. More details will follow; watch for further updates, as the show might be postponed.

MySQL Database Service

MySQL Database Service with HeatWave is now available in 22 regions

We are pleased to announce that MySQL Database Service with HeatWave is now available in 22 regions worldwide.  With the introduction of 10 new regions, MySQL Database Service is now present in all Oracle Cloud Infrastructure commercial regions: North America  US East (Ashburn)  US West (Phoenix)  US West (San Jose)  Canada Southeast (Toronto)  Canada Southeast (Montreal) Latin America  Brazil East (Sao Paulo)  Chile Central (Santiago) EMEA  UK South (London)  UK West (Newport)  Germany Central (Frankfurt)  Switzerland North (Zurich)  Netherlands Northwest (Amsterdam)  Saudi Arabia West (Jeddah)  UAE East (Dubai) APAC  Japan East (Tokyo)  Japan Central (Osaka)  South Korea Central (Seoul)  South Korea North (Chuncheon)  Australia East (Sydney)  Australia Southeast (Melbourne)  India West (Mumbai)  India South (Hyderabad)     You are welcome to try MySQL Database Service and many other services for free. Go to oracle.com/cloud/free and create your free account in Oracle Cloud Infrastructure. Check the documentation for the quick steps to create your MySQL databases.  Visit oracle.com/mysql to learn more about MySQL Database Service and HeatWave. The MySQL team is tirelessly working to continue to evolve MySQL Database Service and HeatWave. More features are coming soon. Stay tuned!

MySQL Best of 2020

2020 was a very eventful year at MySQL. Now that we just started 2021, this is a great time to look back at everything we accomplished in 2020, from the anniversary celebrations to the year's best webinars, the best events, the best customer quotes, the best new product addition, the best animated videos, the best social media contribution, the best social media posts and of course, Larry’s best quote and many more.  Here is our "Best of 2020," enjoy!  (We'd love to see your favorites too.  Feel free to share your favorites with us in the comment section) 1. 2020 was our MySQL 25th anniversary which we shared and celebrated with our friends from Java and PHP.  Watch the animated video that celebrates 25 years of community contribution and loyal customers and users. 2. Introduction of MySQL Database Service with HeatWave. The launch generated over 100 news media articles. Read the full press release and Watch the official launch event 3. First Oracle Developer Live event fully dedicated to MySQL. The event generated over 12k attendees and many more viewers on YouTube.  So, we’ll call it the best MySQL event of the year.  You can visualize the 11 sessions online at: https://developer.oracle.com/developer-live/mysql/ 4. Larry Ellison's awesome quote about the new MySQL Database Service introduction with HeatWave analytics engine.  We’ll call it the best MySQL quote of the year. 5. Top 2020 cloud webinars Migrating from Amazon RDS to Oracle MySQL Database Service Business Benefits of Using MySQL Database Service with HeatWave Getting Started with MySQL Database Service plus HeatWave Migrating From On-Premises to Oracle MySQL Database Service Top Reasons to Use MySQL Database Service Using MySQL Database Service with Oracle Analytics Cloud 6. Top 2020 on-premises webinars Top Features that Makes MySQL 8.0 the Most Innovative Database of the Year Demoing Oracle MySQL Transparent Data Encryption Road to Success to MySQL Discover New Insights for Your High Availability Strategy Benefits of Using MySQL for SQL and NoSQL 7. Top 2020 cloud blog posts  Introducing the MySQL Database Service Introducing MySQL Database Service and MySQL HeatWave Analytics Engine MySQL Database Service Analytics Engine and Oracle Cloud Infrastructure: Run applications and analytics with better performance, scale, and efficiency Migrate from on-premises MySQL to MySQL Database Service Announcing the MySQL Database Service support for E3 shapes Step by Step Guide to Take your MySQL Instance to the Cloud HeatWave for MySQL – Technical Deep Dive How to deploy a MySQL Database Service Instance with PHPMyAdmin Using MySQL Workbench with MySQL Database Service Manage MySQL Database Service (MDS) DB Systems with OCI CLI Deploy the Spring Boot framework on Oracle Cloud Infrastructure with MySQL Database Service. Deploy cloud native applications that use Oracle MySQL Database Service to the cloud. Deploying WordPress on OCI with MySQL Database Service: the easy way ! Using MySQL Database Service for WordPress Testing MySQL Database Service without VPN Testing MySQL Database Service without VPN – part 2 8. Top 2020 on-premises blog posts  New Release: MySQL 8.0.22 - Top Blog Posts MySQL Shell Dump & Load part 1: Demo! More Robust Network Partition Handling in Group Replication MySQL Shell and storing passwords From MySQL Group Replication to MySQL InnoDB Cluster MySQL: who’s filling my error log? 
MySQL Performance : Understanding InnoDB IO Internals & "Checkpointing" Three Bad MySQL Query Types You May Be Writing InnoDB Data Locking – Part 1 “Introduction” Upgrading from MySQL 5.7 to MySQL 8.0 Part I Upgrading from MySQL 5.7 to 8.0 on Windows Upgrading from MySQL 5.7 to MySQL 8.0 Part II 8. Top customer video testimonials SCSK achieves faster results (0:58) Square Enix sees performance gains (1:00) Credorax Selects MySQL EE for Increased Security 9. Best new feature from the MySQL Certification and Training team MySQL Explorer. Learn MySQL basics and earn the Explorer badge today! 10. Best Social Media contribution: MySQL Database Service in Forza Horizon 4 Shumber Rivera from Mazatlan in Mexico made a video game version of the MySQL racing car.  Congrats to Shumber for the most original social media contribution to MySQL.  Watch his car race.  11. Coolest videos Ready to take MySQL Database Service for a spin? Introduction to MySQL Database Service Introduction to MySQL Enterprise Edition MySQL 25th Anniversary  12. 2020 Product Releases MySQL 8.0.19 MySQL 8.0.20 MySQL 8.0.21 MySQL 8.0.22 MySQL Database Service HeatWave 13. Top white papers Guide to MySQL Database Service with Analytics Engine Guide to HeatWave: Real-time Analytics for MySQL Database Service 14. 2020 Top Customer Cases Studies University of Toronto (Canadian University) French Billet (French online ticket sales company) Seat24 (Swedish e-commerce travel group) Dhimyotis (French Certificate Authority) Flashnetworks  (Israeli optimization solution provider) Globo.com  (media group in Latin America) Mobitel (Sri Lanka's mobile communications provider) NJ India Invest (Indian financial services) K-Bank (the first Internet bank in Korea) BSE  (Asia's number one stock exchange) Timon (Korean e-commerce company) SQUARE ENIX (Japanese gaming company) KDDI  (Japanese telecom company)

MySQL

HeatWave for MySQL – Technical Deep Dive

In the previous blog we provided insights on how the MySQL database has been enhanced to integrate with a native high performance analytics engine – HeatWave: https://blogs.oracle.com/mysql/breakthrough-enhancements-in-mysql-database-service-with-analytics-engine HeatWave is a new, massively parallel in-memory query engine developed by Oracle Labs and designed for accelerating analytic workloads. HeatWave provides 100x-1000x acceleration over MySQL and MySQL compatible databases like Aurora. There are four key architecture choices which provide compelling performance with HeatWave:
An in-memory hybrid columnar format which is conducive to vector processing.
A massively parallel architecture which is enabled by a massively partitioned architecture.
State-of-the-art algorithms for distributed analytic processing.
An engine optimized for Oracle Cloud Infrastructure.

Architecture Overview of HeatWave
HeatWave uses a columnar in-memory representation that facilitates vectorized processing, leading to very good query performance (Figure 1). The data is encoded and compressed prior to being loaded in memory. This compressed and optimized in-memory representation is used for both numeric and string data. This results in a significantly reduced memory footprint, which translates to reduced cost for customers. The data organization in memory and the query processing are structured such that they are amenable to vector and SIMD processing, which reduces interpretation overhead for queries and improves query performance.
Figure 1. Vectorized in-memory columnar representation for analytic processing

One of the key design points of HeatWave is to massively partition data across a cluster of analytics nodes, which can be operated upon in parallel in each node (Figure 2). The partitioning is done at near memory bandwidth and the size of the partitions is optimized for the underlying cache. This enables high cache hit rates for analytic operations and provides good inter-node scalability. Each analytics node within a cluster and each core within a node can process partitioned data in parallel, including parallel scans, joins, group-by, aggregation and top-k processing.
Figure 2. Massively parallel architecture

Data partitioning in the memory of HeatWave can be driven by a workload-based mechanism which minimizes data movement across cores or nodes and improves scalability. Partitioning by a workload-dependent column reduces data movement for queries with join (or group-by) predicates, since it provides co-location of data across tables in the join condition. HeatWave has implemented state-of-the-art algorithms for distributed analytic processing (Figure 3). Joins within a partition are processed fast by using vectorized build and probe join kernels. Network communication between analytics nodes is optimized by using asynchronous batch I/Os.
Figure 3. Distributed analytic algorithms optimized for OCI

Furthermore, HeatWave has an intelligent scheduler which efficiently overlaps computation tasks with network communication tasks such that, while the data is being transferred between nodes, each node is busy doing some computation (Figure 4). This helps achieve good scalability across a large number of servers.
Figure 4. Intelligent scheduler to overlap compute time with network communication

HeatWave uses AMD-based VM shapes in OCI which provide the best price per GB of DRAM. Various aspects of query processing have been optimized for Oracle Cloud Infrastructure.
For example, the granularity of partitioning has been optimized for the cache size of the underlying compute shape, the algorithms are NUMA-aware, and the query cost model factors in the network bandwidth available in OCI.
Machine Learning Based Automation
HeatWave leverages automated machine learning (AutoML) capabilities developed in Oracle Labs to automate various aspects of the service. This automation is helpful both for users of the service and for engineers managing the service. Since this automation is based on machine learning, the system can intelligently predict and act upon various scenarios.
Auto Provisioning provides a recommendation on how many nodes are required to run a workload. When the service is started, the database tables on which analytics queries are run need to be loaded into the memory of HeatWave. The size of the HeatWave cluster needed depends on the tables and columns that need to be loaded, and on the compression achieved for this data in memory. Figure 5 compares the traditional (i.e., manual) approach to estimating the cluster size with Auto Provisioning. In traditional provisioning, the user needs to guess a cluster size. Underestimation results in data load or query execution failures due to space limitations. Overestimation results in additional costs for unneeded resources. As a result, users iterate until they determine the right cluster size, and this size estimate becomes inaccurate again when tables are updated.
The right side of Figure 5 shows how we solve this problem with Auto Provisioning, an ML-based cluster size estimation advisor. By consulting well-trained and accurate ML models, the user obtains the right cluster size for their dataset, so there is no need to guess. Later, if the customer data grows or additional tables are added, the user can again take advantage of the Auto Provisioning advisor.
Figure 5. Comparison of a traditional provisioning flow vs Auto Provisioning
The techniques mentioned in this blog help HeatWave achieve good performance on a single node as well as scale very well across a large number of nodes. Additionally, since HeatWave is optimized for the underlying OCI platform, it is very cost effective. In the next blog we will share performance results.
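To make the loading step concrete, here is a minimal SQL sketch of how a table is typically placed into (and removed from) HeatWave memory. The table name orders is a hypothetical example; the exact statements and the secondary engine name should be checked against the HeatWave documentation for your MDS version:

-- Mark the table for the secondary (HeatWave) engine and load it into cluster memory
ALTER TABLE orders SECONDARY_ENGINE = RAPID;
ALTER TABLE orders SECONDARY_LOAD;

-- Free cluster memory when the table is no longer needed by analytic queries
ALTER TABLE orders SECONDARY_UNLOAD;

Once a table is loaded, applications keep sending queries to MySQL as usual; the offload to HeatWave is decided by the optimizer and requires no query changes.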


MySQL recorded sessions from September - November 2020

Since the Covid situation we are - like the rest of the world - moving most of our MySQL sessions & workshops online. Below you can find the list of events with MySQL recorded sessions that we ran from September to the end of November, plus some other interesting videos as well. The list below includes publicly available recorded MySQL sessions (or slides only) from conferences and other events we attended.
September 2020:
Open Source Conference Hiroshima, September 19, 2020
MySQL Session: "MySQL Update" Speaker: Yoshiaki Yamasaki, the MySQL Principal Solution Engineer URL: Agenda, Recording
Zabbix China, September 25-26, 2020
MySQL Session (translated): “Insight into MySQL performance and troubleshooting methods” Speaker: Ivan Ma, the MySQL Principal Solution Engineer URL: Agenda, Recording
MySQL User Group Session, September 28, 2020
MySQL Session: "Introducing MySQL GIS feature" Speaker: Yoshiaki Yamasaki, the MySQL Principal Solution Engineer URL: OSC Hiroshima Agenda, Recording
October 2020:
Oracle Developer Live MySQL (Americas), October 13, 2020
On the link above you can find the following sessions:
Tomas Ulin, Vice President, MySQL Engineering, Oracle and Mark Leith, MySQL Software Development Director, Oracle at the keynote "MySQL 8.0: A Giant Leap for SQL”
Nicolai Plum, Database Engineer, Booking.com on "MySQL 8 at Booking.com”
Miguel Araújo, Principal Software Developer, Oracle on "MySQL Shell: The Best MySQL DBA Tool”
Sugu Sougoumarane, Chief Technology Officer, Vitess on "Sharding MySQL with Vitess and Kubernetes”
Rui Quelhas, Principal Software Developer, Oracle on "Developing NodeJS Applications with MySQL 8.0”
Juan Rene Ramirez Monarrez, Software Developer Manager, Oracle on "Migrating to MySQL Database Service: Made Fast and Easy!”
Luís Soares, Software Development Director, Oracle on "MySQL Replication Performance, Efficiency, and the Cloud”
Harin Vadodaria, Principal MTS, Oracle and Mike Frank, Product Management Director, MySQL, Oracle on "MySQL Security Model in Oracle Cloud”
Kenny Gryp, MySQL Product Manager, Oracle on "MySQL Database Architectures”
Jesper Wisborg Krogh, Principal Database Reliability Engineer, Okta on "MySQL Performance Tuning”
Fred Descamps at the panel "SQL or NoSQL? Schema or Schemaless?”
URL: recording of the above sessions, More at Fred’s blog
OpenSUSE & LibreOffice 2020, October 15-17, 2020
MySQL Session: "MySQL Clone: a better way to copy MySQL databases" Speaker: Georgi Kodinov, MySQL Software Development Director URL for slide deck only
All Things Open 2020, October 19-20, 2020
MySQL Session: "MySQL Indexes, Histograms, Locking Options, and Other Ways to Speed Up Your Queries" Speaker: David Stokes, the MySQL Community Manager Recording
Percona Live Online, October 21, 2020
Speaker: Fred Descamps, the MySQL Community Manager:
MySQL Session: "MySQL 8.0 Document Store – Discovery of a New World” URL: recording
MySQL Session: "The State of the Dolphin (keynote)" URL: recording
More at Fred’s blog
MySQL Webinar, October 22, 2020
Webinar title: "Running Drupal with Oracle MySQL Database Service" Webinar Speaker: Fred Descamps, the MySQL Community Manager URL: recording, More at Fred’s blog
OSC Tokyo/Fall, October 23-24, 2020
MySQL Session: "MySQL Update" Speaker: Machiko Ikoma URL: recording
MOPCON, October 24-25, 2020
MySQL Session: " Speaker: Ivan Tu, the MySQL Solution Engineering Manager URL: recording
November 2020:
PyCon HK, November 6-7, 2020
MySQL Session: "Python is your friend with MySQL Shell" Speaker: Ivan Ma, the MySQL Principal Solution Engineer URL: recording
MySQL Session: "Develop Cloud-Native Application using MySQL Database Service" Speaker: Ryan Kuan, the MySQL Master Principal Solution Engineer URL: recording
OpenSource Lisbon, November 11, 2020
MySQL Session: "A step by Step Guide on Taking your MySQL instance to the Cloud" Speaker: Keith Hollman, the Principal Solution Engineer URL: recording
MoodleMoot, November 16-18, 2020
Speaker: Keith Hollman, the Principal Solution Engineer
MySQL Session: "Why clone your Moodle MySQL?" URL: recording
MySQL Session: "Moodle available and scalable with MySQL HA" URL: recording
DOAG, November 17-19, 2020
MySQL Session: "MySQL 8.0" Speaker: Carsten Thalheimer, the MySQL Solution Engineer URL for slide deck only
Other recordings you might find interesting:
OpenSource Summit North America & Embedded Linux Conference NA, June 29-July 2, 2020
MySQL Session: "MySQL New Features" Speaker: David Stokes, the MySQL Community Manager URL: Recording
cPanel Live with David Stokes, the Community Manager of MySQL, presenting MySQL 8.0 Changes for DBAs and DevOps, with a live Q&A session, August 12, 2020 - Recording
DebConf2020, August 23-29, 2020
MySQL Session: "MySQL Replication Update" Speaker: David Stokes, the MySQL Community Manager - Recording
Dave's MySQL Basics: David Stokes, the MySQL Community Manager, talking about MySQL basics. The playlist includes 13 episodes right now! - Recording
Jim Grisanzio talks with MySQL community managers Dave Stokes and Frederic Descamps about the latest news on the project, contributing, and upcoming online events. - Recording
"JSON Document Validation" by David Stokes, the MySQL Community Manager - Recording
"Python & MySQL", August 11, 2020 - Recording
Deploy Friday webcast, November 6, 2020
Live streaming video with Airton Lastori, the MySQL Principal Product Manager & David Stokes, the MySQL Community Manager, talking about MySQL as a leading open-source database management system. - Recording


Introducing MySQL Database Service and MySQL Analytics Engine

This blog includes: keynote video highlights, a link to the full keynote, product demo, testimonial, upcoming MySQL Database Service and Analytics Engine webinars, and more. Today the MySQL team introduces the MySQL Database Service with the MySQL Analytics Engine.
1. Watch the keynote highlights:
2. See the MySQL Database Service in action in a short product demo.
3. Learn about the great savings over competitive solutions such as Amazon Aurora, RDS, or Redshift.
4. Discover how the MySQL Analytics Engine accelerates MySQL queries by 400x.
5. Check out the benchmarks: MySQL Database Service with Analytics Engine is 1100x faster than Amazon Aurora, 2.7x faster than Redshift, and scales to 1000s of cores.
6. Read some of the customer testimonials and how they increased their query performance compared to other cloud solutions:
"MySQL Analytics Engine is 10 times faster than the analytics service of another major cloud vendor. Now there is no need for ETL. Compared to MySQL on-premise, the MySQL Analytics Engine is 4,000 times faster.” - Tetsuro Ikeda, Manager of Cloud IT Architecture Service Department, SCSK Corporation
7. Watch the full keynote
8. Reference blogs: Breakthrough Enhancements in MySQL Database Service with Analytics Engine; MySQL Database Service Analytics Engine and Oracle Cloud Infrastructure: Run applications and analytics with better performance, scale, and efficiency
9. Upcoming live webinars:
12/08/20 - 9AM PT - Business Benefits of using MySQL Database Service with the MySQL Analytics Engine
12/09/20 - 9AM PT - Getting Started with Oracle MySQL Database Service and MySQL Analytics Engine
12/14/20 - 9AM PT - Migrating from Amazon RDS to Oracle MySQL Database Service
12/16/20 - 9AM PT - Using MySQL Database Service with Oracle Analytics Cloud
10. Other references: Official Press Release, Whitepaper, Oracle MySQL: https://www.oracle.com/mysql/, Try it now


Breakthrough Enhancements in MySQL Database Service with Analytics Engine

Introduction
MySQL is the most popular open-source database and many organizations choose MySQL to store their valuable enterprise data. MySQL is optimized for OLTP, but it is not designed for analytic processing. As a result, organizations which need to efficiently run analytics on data stored in MySQL move their data to another database to run analytic applications. This approach of moving data to another database introduces complexity and additional cost for customers in multiple ways:
- Applications need to define complex logic for extracting relevant data from MySQL.
- The extracted data needs to be transported to another database across networks in a secure way, which takes time and creates security vulnerabilities.
- Data in the other database needs to be manually kept in sync with the MySQL database, and as a result the data on which analytics is performed can become stale.
- Managing multiple databases and running OLTP and analytics applications on different databases incurs additional overhead and cost.
MySQL Analytics Service is designed to enable customers to run analytics on data which is stored in the MySQL database without the need for ETL (Figure 1).
Figure 1. Both OLTP and OLAP applications can be run with MySQL Database Service with Analytics Engine
This service is built on an innovative in-memory analytics engine which is architected for scalability and performance and is optimized for Oracle Cloud Infrastructure (OCI). Various aspects of this service leverage machine-learning-driven automation, reducing database administrative costs. This results in a very performant solution for SQL analytics at a fraction of the cost compared to other industry solutions. Customers who use MySQL Analytics Service will benefit from significantly better performance, a single database for their OLTP and analytic needs, no requirement for ETL, support for real-time analytics, and significantly reduced cost.
Deployment Scenarios
With the native MySQL Analytics Engine, customers who store data in MySQL Database Service can seamlessly run analytics. A MySQL Analytics instance is a cluster composed of a MySQL Database Service VM and one or more analytics nodes, as shown in red in Figure 2. When analytics is enabled, an analytics plugin is installed on the MySQL database node; it is responsible for cluster management, loading data into the memory of the analytics nodes, query scheduling, query execution, and returning query results to the MySQL database. Applications continue to interact with the MySQL database without needing to be aware of the MySQL Analytics Engine.
An analytics node is a VM instance that stores data in memory for analytics processing and executes analytics queries. The number of nodes needed for a workload depends on the amount of data which needs to be processed, the compression factor achieved on the dataset, and the query characteristics. On average, users can expect to load around 400GB of data per analytics node. The exact number of nodes needed for a schema can be predicted by the auto provisioning advisor which is part of the service.
Figure 2. MySQL Analytics exclusively available in Oracle Cloud
MySQL Analytics currently supports up to 24 analytics nodes in one cluster, which translates to a processing capacity of approximately 10 TB of analytics data. 10TB is the amount of data which can be populated in the memory of the analytics nodes at a given moment. There is no limit to the amount of data which can be stored in the MySQL database, and customers can choose which tables or columns from the MySQL database schema to load into the memory of the analytics nodes. If tables are no longer needed by queries, users can remove them from memory to make room for other data.
MySQL Database Service with Analytics Engine is a great solution for customers who need to run hybrid transactional and analytical workloads. Transactional queries are processed in the MySQL database node and data updated in MySQL InnoDB is transparently propagated to the analytics cluster in real time. This enables customers to run both OLTP and real-time analytics simultaneously within a single database.
On-premise customers who cannot move their MySQL deployment to a cloud because of compliance or regulatory requirements can still leverage MySQL Analytics by using the hybrid deployment model shown in Figure 3. In a hybrid deployment, customers can use MySQL replication to replicate on-premise MySQL data to MySQL Analytics without the need for ETL.
Figure 3. Hybrid deployment for enabling analytics on data stored on premise
Native Real-Time Analytics for MySQL
Integration of the MySQL Analytics Engine with MySQL Database Service provides a single data management platform for all OLTP and analytics needs of an enterprise. MySQL Analytics is designed as a native engine and completely shields all the low-level implementation details at the storage level from the end users. As a result, users can manage both the MySQL Analytics Engine and MySQL Database Service with the same management tools, including the OCI console, REST API, and command line interface. Since MySQL Analytics is an in-memory processing engine, data is only persisted in the MySQL InnoDB storage engine. This allows users to manage analytics data the same way they manage transactional data in MySQL.
Users connect to the MySQL database node through standard tools and standards-based ODBC/JDBC connectors. Queries which can be accelerated by the analytics engine are transparently pushed by the MySQL database node to the analytics engine. This enables existing applications to take advantage of the MySQL Analytics Engine without any changes, allowing easy and quick integration. Once users submit a query, the MySQL query optimizer transparently decides if the query should be offloaded to the analytics engine for accelerated execution (Figure 4). This decision is based on whether all operators and functions referenced in the query are supported by the MySQL Analytics Engine and whether the estimated time to process the query with the analytics engine is less than with MySQL. If both conditions are met, the query is pushed to the analytics nodes for processing. Once processed, the results are sent back to the MySQL database node and returned to the application.
Figure 4. MySQL Database enhanced to natively integrate analytic processing
Since the MySQL optimizer has been enhanced to transparently push queries to the analytics engine, no query changes are required for queries to be accelerated. As a result, all existing MySQL compatible tools and applications work seamlessly with the MySQL Analytics Engine (Figure 5).
Figure 5. All MySQL compatible tools and applications run with the MySQL Analytics Engine
Data for the MySQL Analytics Engine is persisted in MySQL InnoDB. Any updates to the tables are automatically propagated to the memory of the analytics nodes, so subsequent queries always see the updated data. This is done behind the scenes by a lightweight change propagation algorithm that can keep up with MySQL data update rates. Changes made by an OLTP application are visible in real time to analytic queries. Please stay tuned for the next blog, where we will provide insights into the innovations of the MySQL Analytics Engine.
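As a rough illustration of the offload decision described above, here is a hedged SQL sketch; the orders table and its columns are hypothetical. The use_secondary_engine session variable is a standard MySQL 8.0 setting, but the exact wording of the EXPLAIN output can vary between versions:

-- When the optimizer decides to offload, EXPLAIN mentions the secondary engine in its output
EXPLAIN SELECT o_priority, COUNT(*) FROM orders GROUP BY o_priority;

-- Offload is transparent by default; FORCED is useful while testing, since queries
-- that cannot run on the analytics engine then return an error instead of
-- silently falling back to InnoDB execution
SET SESSION use_secondary_engine = FORCED;
SELECT o_priority, COUNT(*) FROM orders GROUP BY o_priority;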


MySQL Database Service

Announcing the MySQL Database Service support for E3 shapes

The MySQL team is excited to announce that MySQL Database Service now supports the Oracle Cloud Infrastructure Compute E3 shapes. These shapes are built on the 2nd Gen AMD EPYC 7742 processor, with a base clock frequency of 2.25 GHz and max boost of up to 3.4 GHz, and have better networking and more RAM. MySQL Database Service customers can start to use them today and enjoy higher performance and flexibility for diverse MySQL workloads. The MySQL Database Service E3 offers up to 64 OCPUs (128 virtual cores) and 1 TB of RAM. This is the highest core count and memory for a single fully managed MySQL instance on any public cloud. The new E3 shapes for MySQL Database Service are available in eight commercial regions: Brazil East (Sao Paulo), Canada Southeast (Toronto), Germany Central (Frankfurt), India West (Mumbai), Japan East (Tokyo), UK South (London), US East (Ashburn), and US West (Phoenix). More regions are coming soon. If you already have an Oracle Cloud Infrastructure account, you can easily start to use the new E3 shapes in the MySQL Database Service. Just access the web console and under the MySQL menu, select DB System and Create. You will be able to choose between the following options:
Shape name                        OCPUs   RAM (GB)
MySQL.VM.Standard.E3.1.8GB        1       8
MySQL.VM.Standard.E3.1.16GB       1       16
MySQL.VM.Standard.E3.2.32GB       2       32
MySQL.VM.Standard.E3.4.64GB       4       64
MySQL.VM.Standard.E3.8.128GB      8       128
MySQL.VM.Standard.E3.16.256GB     16      256
MySQL.VM.Standard.E3.24.384GB     24      384
MySQL.VM.Standard.E3.32.512GB     32      512
MySQL.VM.Standard.E3.48.768GB     48      768
MySQL.VM.Standard.E3.64.1024GB    64      1024
Start your 30-day free trial at oracle.com/cloud/free. Get access to a wide range of Oracle Cloud services for 30 days, including MySQL Database Service. Check the documentation for the quick steps to create your MySQL databases. Drop by and check it out!


Announcements

Where's the MySQL Team from November to December 2020

We would like to return to the regular announcements about where you can find the MySQL Community team or MySQL experts during Nov-Dec 2020. All the shows we are attending by the end of this year are mostly VIRTUAL ones. Please find more details below:
November, 2020:
GSCC2020/Virtual, November 14, 2020, @5:00-6:00pm - David Stokes, the MySQL Community Manager, will be talking about "MySQL 8.0 Features for Developers".
MoodleMoot Global/Virtual, November 16-18, 2020 - Please visit our online booth where you can talk online to our booth staff. Please also watch the main streaming area where the MySQL talk will be held. A second talk will be in our own meeting room.
DeveloperWeek Austin 2020/Virtual, November 17-18, 2020 - David Stokes, the MySQL Community Manager, will be talking about "MySQL 8.0 Features for Developers". Dave's talk is scheduled for Nov 17 @ 1:30-1:55pm.
DOAG 2020/Hybrid event, November 17-19, 2020 - Join our team at the online booth, and watch the organizers' schedule for two MySQL speaking slots given by Carsten Thalheimer, the MySQL Master Principal Sales Consultant. Also do not miss the MySQL keynote!
JCConf/Hybrid event, Taipei, Taiwan, November 19, 2020 - This is a hybrid event, partly virtual and partly physical. You can find us at our physical booth in the venue, and there is a quick 3-minute MySQL talk before the session in the agenda. Do not miss the chance to talk to our team at the physical booth at JCConf 2020.
OSC Fukuoka/Virtual, November 28, 2020 - Do not miss the MySQL talk on "Introducing the new MySQL backup tool and the MySQL Database Service", given by Yoshiaki Yamasaki, the MySQL Principal Solution Engineer.
December, 2020:
Open Developers Conference 2020/Virtual, December 19, 2020 - Do not miss the MySQL session given by the local MySQL team; please watch the organizers' website for further updates.


New Release: MySQL 8.0.22 - Top Blog Posts

MySQL 8.0.22 introduces many new improvements and updates, many of which deserve their own blog post for a deep dive into the new features.  Among the notable changes are: Prepared Statements, SHOW PROCESSLIST, TIMESTAMP, Read Only Schema, Error Log, User Management, Optimizer, Replication, Keyring, Router, and more.   Here are the MySQL 8.0.22 top blog posts: Server The MySQL 8.0.22 Maintenance Release is Generally Available Announcing MySQL Server 8.0.22, 5.7.32 and 5.6.50 Shell MySQL Shell 8.0.22 for MySQL Server 8.0 and 5.7 has been released What’s New in MySQL Shell 8.0.22 Replication Posted: MySQL 8.0.22 Replication Enhancements Connectors MySQL Connector/J 8.0.22 has been released MySQL Connector/NET 8.0.22 has been released MySQL Connector/Python 8.0.22 has been released MySQL Connector/C++ 8.0.22 has been released MySQL Connector/Node.js 8.0.22 has been released MySQL Connector/ODBC 8.0.22 has been released Workbench MySQL Workbench 8.0.22 has been released Integration MySQL announces integration with Oracle Cloud Infrastructure Vault Community MySQL 8.0.22: thank you for the contributions MySQL Router 8.0.22 enables REST API at bootstrap   Download MySQL 8.0.22 now: https://www.mysql.com/downloads/    


Announcements

MySQL announces integration with Oracle Cloud Infrastructure Vault

Securing a database includes securing sensitive items such as keys, passphrases, and more. It's important to protect them from misuse by using cryptography and access controls that limit usage to only the components that use them or the admins that require them. The MySQL Enterprise Keyring security component provides integrated key management with secure vaults. Starting with MySQL Enterprise Edition 8.0.22, you can manage your MySQL encryption keys for TDE and more using the Oracle Cloud Infrastructure Vault. OCI Vault lets you securely store, manage, and audit arbitrary keys and other types of configuration secrets using Oracle Cloud Infrastructure's Vault service.
Oracle Infrastructure Vault - Console - Shown Storing MySQL Secrets
This integration with the OCI Vault leverages the MySQL keyring with the addition of the "keyring_oci" plugin. This keyring plugin communicates with Oracle Cloud Infrastructure Vault for back-end storage. No key information resides in the MySQL server's local file system storage.
Benefits of using OCI Vault for your MySQL key management:
Fully Managed Service - but you control access to your keys and secrets.
Centralized - a common repository to manage keys and secrets and define policies.
Auditing - know exactly how secrets are used: where, by whom, and when.
Compliance - Vault drives the controls required for regulatory compliance.
HA - keys are replicated within a region to ensure their durability and availability.
Secure - FIPS compliant.
No cost if you have an Oracle Cloud tenancy.
For technical details: MySQL Keyring for OCI Vault, MySQL Enterprise Security Overview, MySQL 8.0 Secure Deployment Guide
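To give a rough idea of what the server-side setup looks like, here is a hedged my.cnf sketch. The OCIDs, fingerprint, paths, and endpoints below are placeholders, and the exact list of required keyring_oci variables should be verified against the MySQL Enterprise Edition keyring documentation for your release:

[mysqld]
# keyring plugins must be available before InnoDB initializes, so they are loaded early
early-plugin-load=keyring_oci.so

# API identity MySQL uses to talk to OCI Vault (placeholder values)
keyring_oci_user=ocid1.user.oc1..<placeholder>
keyring_oci_tenancy=ocid1.tenancy.oc1..<placeholder>
keyring_oci_key_file=/path/to/oci_api_key.pem
keyring_oci_key_fingerprint=<placeholder-fingerprint>

# where the keys live: compartment, vault, master key, and the vault endpoints
keyring_oci_compartment=ocid1.compartment.oc1..<placeholder>
keyring_oci_virtual_vault=ocid1.vault.oc1..<placeholder>
keyring_oci_master_key=ocid1.key.oc1..<placeholder>
keyring_oci_management_endpoint=<vault-management-endpoint>
keyring_oci_encryption_endpoint=<vault-crypto-endpoint>

With the keyring in place, features such as InnoDB tablespace encryption (TDE) can then use keys that never leave the OCI Vault.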


A Step by Step Guide to Take your MySQL Instance to the Cloud

You have a MySQL instance? Great. You want to take it to a cloud? Nothing new. You want to do it fast, minimizing downtime / service outage? “I wish” I hear you say. Pull up a chair. Let’s have a chinwag. Given the objective above, i.e. “I have a database server on premise and I want the data in the cloud to ‘serve’ my application”, we can go into details: - Export the data - Hopefully make that export find a cloud storage place ‘close’ to the destination (in my case, @OCI of course) - Create my MySQL cloud instance. - import the data into the cloud instance. - Redirect the application to the cloud instance. All this takes time. With a little preparation we can reduce the outage time down to be ‘just’ the sum of the export + import time. This means that once the export starts, we will have to set the application in “maintenance” mode, i.e. not allow more writes until we have our cloud environment available.  Depending on each cloud solution, the ‘export’ part could mean “export the data locally and then upload the data to cloud storage” which might add to the duration. Then, once the data is there, the import might allow us to read from the cloud storage, or require adjustments before the import can be fully completed. Do you want to know more? https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/  Let’s get prepared then: Main objective: keep application outage time down to minimum. Preparation: You have an OCI account, and the OCI CLI configuration is in place. MySQL Shell 8.0.21 is installed on the on-premise environment. We create an Object Storage bucket for the data upload. Create our MySQL Database System. We create our “Endpoint” Compute instance, and install MySQL Shell 8.0.21 & MySQL Router 8.0.21 here. Test connectivity from PC to Object storage, from PC to Endpoint, and, in effect, from PC to MDS. So, now for our OCI environment setup. What do I need? Really, we just need some files to configure with the right info. Nothing has to be installed nor similar. But if we do have the OCI CLI installed on our PC or similar, then we’ll already have the configuration, so it’s even easier. (if you don’t have it installed, it does help avoid the web page console once we have learned a few commands so we can easily get things like the Public IP of our recently started Compute or we can easily start / stop these cloud environments.) What we need is the config file from .oci, which contains the following info: You’ll need the API Key stuff as mentioned in the documentation “Required Keys and OCIDs”. Remember, this is a one-off, and it really helps your OCI interaction in the future. Just do it. The “config” file and the PEM key will allow us to send the data straight to the OCI Object Storage bucket. MySQL Shell 8.0.21 install on-premise. Make a bucket. I did this via the OCI console. This creates a Standard Private bucket. Click on the bucket name that now appears in the list, to see the details. You will need to note down the Name and Namespace. Create our MySQL Database System. This is where the data will be uploaded to. This is also quite simple. And hey presto. We have it. Click on the name of the MDS system, and you’ll find that there’s an IP Address according to your VCN config. This isn’t a public IP address for security reasons. On the left hand side, on the menu you’ll see “Endpoints”. Here we have the info that we will need for the next step. For example, IP Address is 10.0.0.4. Create our Endpoint Compute instance. 
In order to access our MDS from outside the VCN, we'll be using a simple Compute instance as a jump server. Here we'll install MySQL Router to be our proxy for external access. And we'll also install MySQL Shell to upload the data from our Object Storage bucket. For example, https://gist.github.com/alastori/005ebce5d05897419026e58b9ab0701b.
First, go to the Security List of your OCI compartment, and add an ingress rule for the port you want to use in Router, allowing access from the IP address of your application server or from the assigned on-premise public IP address.
Router & Shell install 'n' configure
Test connectivity. Test MySQL Router as our proxy, via MySQL Shell:
$ mysqlsh root@kh01:3306 --sql -e 'show databases'
Now, we can test connectivity from our PC / application server / on-premise environment. Knowing the public IP address, let's try:
$ mysqlsh root@<public-ip>:3306 --sql -e 'show databases'
If you get any issues here, check your ingress rules at the VCN level. Also, double-check the OS firewall rules on the freshly created compute instance.
Preparation is done. We can connect to our MDS instance from the Compute instance where MySQL Router is installed, kh01, and also from our own (on-premise) environment. Let's get the data streaming.
MySQL Shell Dump Utility
In effect, it's here that we'll be 'streaming' data. This means that from our on-premise host we'll export the data into the osBucket in OCI, and at the same time, read from that bucket from our Compute host kh01, which will import the data into MDS. First of all, I want to check the commands with "dryRun: true".
util.dumpSchemas dryRun
From our own environment / on-premise installation, we now want to dump / export the data:
$ mysqlsh root@OnPremiseHost:3306
You'll want to see what options are available and how to use the util.dumpSchemas utility:
mysqlsh> \help util.dumpSchemas
NAME
      dumpSchemas - Dumps the specified schemas to the files in the output directory.
SYNTAX
      util.dumpSchemas(schemas, outputUrl[, options])
WHERE
      schemas: List of schemas to be dumped.
      outputUrl: Target directory to store the dump files.
      options: Dictionary with the dump options.
Here's the command we'll be using, but we want to activate the 'dryRun' mode, to make sure it's all ok. So:
util.dumpSchemas( ["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"] } )
["test"] - I just want to dump the test schema. I could put a list of schemas here. Careful if you think you can export internal schemas, 'cos you can't.
"test" - this is the outputUrl target directory. Watch the prefix of all the files being created in the bucket.
options:
dryRun - quite obvious. Change it to false to run.
showProgress - I want to see the progress of the loading.
threads - the default is 4, but choose what you like here, according to the resources available.
ocimds - VERY IMPORTANT! This makes sure that the environment is "MDS ready", so when the data gets to the cloud, nothing breaks.
osBucketName - the name of the bucket we created.
osNamespace - the namespace of the bucket.
ociConfigFile - this is what we looked at, right at the beginning. This is what makes it easy.
compatibility - there is a list of options here that help reduce customizations and/or simplify our data export so it is ready for MDS.
Here I am looking at exporting / dumping just schemas. I could have dumped the whole instance via util.dumpInstance. Have a try!
I tested a local dumpSchemas export without the ocimds readiness option, and it is worth sharing the result: this is how I found out that I needed a primary key to be able to configure chunk usage, and hence get a faster dump:
util.dumpSchemas(["test"], "/var/lib/mysql-files/test/test", {dryRun: true, showProgress: true})
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `test`.`reviews`
Writing DDL for schema `test`
Writing DDL for table `test`.`reviews`
Data dump for table `test`.`reviews` will be chunked using column `review_id`
(I created the primary key on the review_id column and got rid of the following warning at the end:)
WARNING: Could not select a column to be used as an index for table `test`.`reviews`. Chunking has been disabled for this table, data will be dumped to a single file.
Anyway, I used dumpSchemas (instead of dumpInstance) with ocimds and then loaded with the following:
util.loadDump dryRun
Now, we're on the compute we created, with Shell 8.0.21 installed and ready to upload / import the data:
$ mysqlsh root@kh01:3306
util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", ociConfigFile: "/home/osuser/.oci/config"})
As imagined, I've copied my PEM key and oci CLI config file to the compute, via scp, to a "$HOME/.oci" directory.
Loading DDL and Data from OCI ObjectStorage bucket=test-bucket, prefix='test' using 8 threads.
Util.loadDump: Failed opening object '@.json' in READ mode: Not Found (404) (RuntimeError)
This is due to the bucket being empty. You'll see why it complains about the "@.json" in a second. You want to do some "streaming"? With our 2 session windows open, one on the on-premise instance and the other on the OCI compute host, both connected with mysqlsh:
On-premise:
dry run: util.dumpSchemas(["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})
real: util.dumpSchemas(["test"], "test", {dryRun: false, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})
OCI Compute host:
dry run: util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})
real: util.loadDump("test", {dryRun: false, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})
They do say a picture is worth a thousand words; here are some images of each window, executed at the same time:
On-premise:
At the OCI compute host you can see the waitDumpTimeout take effect with:
NOTE: Dump is still ongoing, data will be loaded as it becomes available.
In the osBucket, we can now see content (which is what the loadDump is reading): And once it's all dumped 'n' uploaded, we have the following output: If you like logs, check the .mysqlsh/mysqlsh.log that records all the output, under the directory where you executed MySQL Shell (on-premise & OCI compute). Now that the data is all in our MySQL Database System, all we need to do is point the web server or the application server to the OCI compute system's IP and port so that MySQL Router can route the connection to happiness! Conclusion


MySQL Database Service

MySQL Database Service - Webinar Series

We have scheduled a series of several live webinars about MySQL Database Service, the only service 100% developed, managed and supported by the MySQL Team.   Our webinars will cover a full introduction to the new service.  We will discuss the business benefits of using the MySQL Database Service compared to on-premises and vs services of “forked” cloud versions of MySQL.  We will show you how to migrate your existing database from on-premises directly into the new service.  We will demonstrate how to migrate data from RDS to the MySQL Database Service.  And if you happen to use WordPress, we will also show you how to run WordPress with the MySQL Database Service.  __________________________________________________________________________ Date: 9/9/2020 @9AM PST Title: Introduction to MySQL Database Service Description: The MySQL Database Service is the only database service that is 100% developed, managed, and supported by the MySQL team. Discover how the new database service makes it easy for organizations to deploy cloud native applications using the world’s most popular open source database. See how it delivers significant savings over on-premises database management and over “forked” versions from third party Cloud platforms. Register now __________________________________________________________________________ Date: 9/10/2020 @9AM PST Title: Top Reasons to Use the MySQL Database Service Description: Discover the top reasons businesses should use the MySQL Database Service. Learn the many benefits of a fully managed database service over on-premises database management. See the key competitive advantage of having your MySQL Database Service managed and supported by the MySQL Team. Check the savings over on-premises and over competitive cloud offerings. Register now __________________________________________________________________________ Date: 9/14/2020 @9AM PST Title: Migrating from on-premises to MySQL Database Service Description: See how easy it is to migrate from on premises to the MySQL Database Service. It is the only fully managed public cloud service to provide MySQL Enterprise Edition for the highest levels of MySQL security, reliability and uptime, and is built and managed by the same development team that develops MySQL. As a result, MySQL Database Service is 100% compatible with on-premises MySQL, allowing for a seamless transition to the public cloud or a hybrid cloud model.  Register now  __________________________________________________________________________ Date: 9/15/2020 @9AM PST Title: Migrating from Amazon RDS to Oracle MySQL Database Service Description: Migrate from Amazon RDS data to the MySQL Database Service on the highly secure OCI Gen2 cloud platform. Learn the quick steps and best practices for your database migration. Discover the key benefits of getting the MySQL team latest fixes, new features and support. See how MDS delivers up to 70% TCO savings over Amazon RDS. Register now __________________________________________________________________________ Date: 9/16/2020 @9AM PST Title: Running Wordpress with MySQL Database Service Description: Discover how to install Wordpress on the Oracle Cloud using MySQL Database Service. The webinar will cover the entire installation steps and will demo how to migrate a running Wordpress application to MDS with minimal maintenance time. You will learn how to use MDS, MySQL Shell Dump & Load and MDS Inbound Replication.   
Register now  __________________________________________________________________________ Date: 10/21/2020 @9AM PST Title: Introduction to MySQL Database Service Description: The MySQL Database Service is the only database service that is 100% developed, managed, and supported by the MySQL team. Discover how the new database service makes it easy for organizations to deploy cloud native applications using the world's most popular open source database. See how it delivers significant savings over on-premises database management and over "forked" versions from third party Cloud platforms.   Register now  __________________________________________________________________________ Date: 10/22/2020 @9AM PST Title: Running Drupal with Oracle MySQL Database Service Description: Discover how to install Drupal in the Oracle Cloud using MySQL Database Service. The webinar will cover the entire installation steps and will demo how to migrate a running Drupal application to MDS with minimal maintenance time. You will learn how to use MDS, MySQL Shell Dump & Load and MDS Inbound Replication.   Register now  __________________________________________________________________________ See the full list of MySQL webinars, including MySQL Database Service webinars at: https://www.mysql.com/news-and-events/web-seminars/  


MySQL Database Service

Introducing the MySQL Database Service

The MySQL team is thrilled to introduce the MySQL Database Service in the Oracle Cloud Infrastructure (OCI), the only service 100% developed, managed, and supported by the MySQL Team. For those not familiar with MySQL Database Service, it is the same MySQL you know and use, now available in the cloud as a fully-managed service running on Oracle Generation 2 Cloud Infrastructure. It automates time-consuming tasks such as MySQL instances provisioning, patches and upgrades, and backups and restores. Users can easily scale MySQL, monitor cloud resources, and implement security best practices to meet regulatory requirements. Customer applications simply access the MySQL databases via standard MySQL protocols. The typical administrative tasks are automated, integrated, and accessible via the OCI web console, REST API, CLI, or DevOps tools. The MySQL Database Service is available in multiple OCI Regions and quickly expanding to meet your data governance and location requirements. More data center regions are on the way, stay tuned. Built by the MySQL Engineering Team One of the biggest advantages of the MySQL Database Service is that it is developed, managed, and supported by the MySQL Engineering Team. Let’s take a look at why this is so important. MySQL Database Service is developed and maintained by the same team that develops MySQL Community and Enterprise Editions. The service is always up-to-date with the latest MySQL Server release and includes the latest security fixes. You get all the features available in the MySQL Server, including the NoSQL Document Store with the new X-Protocol and X DevAPI. MySQL Database Service has built-in default configurations recommended by the MySQL experts for the best performance while keeping the database secure and stable. The configuration management feature also allows advanced users to customize configurations safely. Because the same MySQL Server is used in the cloud, on-premises, and for hybrid deployments, you get maximum flexibility for your deployment strategy. Customers can use MySQL native replication and MySQL Shell to quickly and easily move workloads to the cloud. Unlike other Cloud Database services that offer limited MySQL support, the MySQL Database Service is backed by the MySQL Support Team at no additional cost. MySQL Database Service is built on top of Oracle Cloud Infrastructure as an ultra-secure native service, leveraging the benefits of a Cloud that is architected and designed to run mission-critical Enterprise workloads and databases. Pricing: Amazon RDS Costs 3.6x More MySQL Database Service is the most affordable database service among the major cloud providers – Amazon RDS, Microsoft Azure, and Google Cloud SQL. Amazon RDS is 3.6x more expensive than MySQL Database Service. Comparisons with other public cloud services are available below, along with the configuration that was used for comparison. MySQL Database Service: Standard E2 AMD 8GB/Core, all regions have the same price. Amazon RDS: Intel M5 8GB/Core, AWS US East. Azure: General Purpose Intel 10GB/Core, MS Azure US-East. Google: N1 Standard Intel 7.5GB/Core, GCP Northern Virginia. Configuration: 100 OCPUs, 1 TB Storage. It is also important to note that MySQL Database Service has the same price for all OCI regions while other services may have a significantly higher price for different regions. Additionally, users can even realize significant storage and backup savings compared to other cloud providers. You can use the Cost Estimator tool for your specific scenario. 
MySQL Database Service is available under the Data Management option. The consumption models are simple: pay-as-you-go billed monthly or committed annual flex consumption with an extra discount. Getting Started Try MySQL Database Service Now. Oracle offers a 30-Day free trial with $300 in free credits with access to a wide range of services including MySQL Database Service, Compute and Storage. Create an account and start for free. Check the documentation for the quick steps to create your MySQL databases. Additional Resources oracle.com/mysql Cost Estimator OCI Free Trial Documentation


Announcements

Top Blog Posts about MySQL 8.0.21

As with any new release, MySQL 8.0.21 introduces many new improvements and updates, many of which deserve their own blog post for a deep dive into the new features. Among the notable changes are: Account Management Notes, JSON Notes, Authentication Notes and changes related to InnoDB, Optimizer, Group Replication, and more. Here are the MySQL 8.0.21 top blog posts:
Server:
- The MySQL 8.0.21 Maintenance Release is Generally Available
- MySQL 8.0 – Who stopped mysqld and how long did it take?
- MySQL: who’s filling my error log?
Shell:
- MySQL Shell Dump & Load part 1: Demo!
- MySQL Shell Dump & Load part 2: Benchmarks
Optimizer:
- PHP With MySQL: JSON_VALUE() now in MySQL 8.0.21
InnoDB:
- MySQL – Keep an eye on your auto_increment values
Replication:
- MySQL 8.0.21 Replication Enhancements
- More Robust Network Partition Handling in Group Replication
- START GROUP_REPLICATION can now take recovery credentials as parameters
- Group Replication SYSTEM messages in the error log
- Specify Recovery IP Addresses in Group Replication
- MySQL Group Replication – Default response to network partitions has changed
- You Can Now Use Binary Log Checksums with Group Replication
Security:
- Help with MySQL and Data for PHP Developers
Certifications:
- MySQL 8.0 Certifications
Community blogs:
- MySQL 8.0.21: thank you for the contributions
- Changes that I like in the new MySQL 8.0.21
Download MySQL 8.0.21 now: https://www.mysql.com/downloads/


MySQL

MySQL Day Virtual Event: 5 Sessions in 1 Day

MySQL Day Virtual Event
Join us on July 29th, 2020 (8AM - 1PM PST) for a virtual event about why and how to upgrade to MySQL 8.0. Learn the key reasons you should upgrade to 8.0. Discover the best practices developed by our support team based on their experience working directly with customers. Get tips and techniques from our community evangelists. Find out why the University of California at Irvine chose to upgrade to 8.0, and learn about their process, their experience, and the improvements to their application performance. Register for the event and attend the sessions of your choice. Sessions are running on the hour so you can easily plan your agenda around your session interests. Each session will last approximately 30-40 minutes with a 15 minute Q&A, followed by a 5-15 minute break between sessions.
AGENDA:
About the sessions:
Best Practices Tips | Upgrading to MySQL 8.0
Upgrading to MySQL 8.0 is easy and straightforward. This webinar will go over the upgrade process, the requirements, and how to use the new MySQL Shell to determine potential problems before the upgrade. You will learn the dos and don'ts. After the webinar, you will know all you need to know to upgrade smoothly and effectively to MySQL 8.0.
MySQL 8.0 Through the Eyes of the MySQL Support Team
Working directly with customers, the support team answers many questions related to upgrading to MySQL 8.0. This session will cover some of the best practices developed by the support team as well as dos and don'ts to be considered before upgrading.
Customer Insights from UC Irvine
In this session, you will get a quick introduction to UCI, why they chose MySQL, the system environment, the number of servers, and some details about the architecture. The session will cover the reasons UCI chose to upgrade to 8.0, the upgrade process, and the upgrade experience.
MySQL 8.0 Indexes, Histograms, and Other Ways to Speed Up Your Queries
Indexes have been used for years to speed up database queries but are poorly understood and often over-utilized. The system overhead of maintaining indexes can become burdensome. Plus, MySQL has several types of indexes, like multi-valued and functional indexes, that can add great value if used properly. Histograms were introduced in MySQL 8.0 and are great for data with little churn, but they need to be properly established. This webinar will cover indexes, histograms, and some other 'secret' techniques to make your MySQL instances really perform.
Transforming Your Application with MySQL 8.0
This session will focus on the business benefits of upgrading to MySQL 8.0. There are many new features that make MySQL 8.0 a must-upgrade version: ease of use, increased productivity, new SQL features, security and performance improvements, and the ability to use the same database for both SQL and NoSQL.
https://www.mysql.com/news-and-events/web-seminars/upgrading-to-mysql-8-0-2020/


Live Webinar: Learn Credorax Best Security Practices with MySQL

Webinar date: July 8th, 2020 @9AM PST
Webinar title: Learn How Credorax is Increasing its SaaS Application Security with MySQL EE.
Speakers: Mike Frank, MySQL Product Management Director; Greg Paks, VP R&D and Operations IT for Credorax
Register now
In this webinar, you will learn how Credorax leverages MySQL EE security features for their SaaS platform. Credorax's SaaS application handles global and domestic payments for merchants and partners, so security is a top priority for the company. Discover how MySQL EE is used in the Credorax production environment to address the security challenges. Learn Credorax's best practices to protect their global operation across Europe, the US, the UK, Malta, Israel, Japan and China. This webinar will cover topics such as encryption, advanced authentication, auditing, database firewalls, and more. If you are a VP, CSO, CISO, or lead a line of business with concerns about security, join us and see how your teams can better secure MySQL. Credorax is a licensed Merchant Acquiring Bank, providing cross-border Smart Acquiring services to global merchants and payment service providers. Remaining true to their hi-tech roots and focus, Credorax is creating the next generation of technology-driven banking solutions for the eCommerce arena.
Credorax's Architecture
Credorax's main payment processing engine uses MySQL at its core, processing hundreds of transactions a second from customers globally. MySQL was chosen for its flexibility and modular design, allowing it to fit into a custom-designed architecture meeting Credorax's specific throughput, availability and future scalability needs at a cost-effective price point. Credorax utilizes a custom-meshed, highly available MySQL architecture which enables both high levels of uptime and spreading of the load for reporting and time-delayed backup. Read more about how Credorax uses MySQL EE for its NextGen Payment Processing technology


Interview + Webinar about SaaS Companies Using MySQL

In my interview with Shane Atherholt, Head of ISVs at MySQL, I asked him several questions: What is an ISV? What is a SaaS application? How are ISVs using MySQL? We also went over the recent data breaches and new government regulations. I asked how MySQL helps ISVs secure their data and comply with the new regulations. I also asked Shane what the top priorities of ISV CEOs and CIOs are and why they should consider a relationship with Oracle. In conclusion, we talked about what we should look forward to in the future with MySQL. Watch the interview below. On May 27th, 2020, Shane Atherholt will hold a webinar focused on SaaS solutions for ISVs where he will go into more in-depth detail about the MySQL offering for ISVs and SaaS. He will also cover the many MySQL Enterprise Edition features that are designed to help ISVs with their SaaS applications so they can be more secure and in compliance. After the webinar, Shane will be available to answer your questions in a Q&A session. Register for the webinar now and prepare your questions!
MySQL Enterprise Edition for SaaS Applications
Wednesday, May 27, 2020 - 9AM Pacific Time
Get a quick overview of the ISVs and SaaS applications using MySQL. Discover the different use cases and the many customer references. See what the top priorities of ISV executives are today. Learn how MySQL Enterprise Edition will help secure your data and comply with the new regulations. Register for the webinar »
Watch the interview with Shane Atherholt, Head of ISVs at MySQL:


Celebrating MySQL 25th Anniversary!

This week MySQL turns 25, on May 23rd to be exact. Join us in celebrating 25 years of great success. Early on, MySQL became the “M” in the LAMP stack that powered the open source revolution. It became, and continues to be, the most popular open source database. Earlier this year it earned the DB-Engines award for most popular database. The most successful applications and websites we all know and use today, such as Twitter, Facebook and Booking, but also Uber, Airbnb and many more, started with MySQL, grew up with MySQL and now scale with MySQL. Over the years it has become the database of choice for many of the most successful open source applications like WordPress, Drupal, Joomla, Magento, and many more. It has always been ranked the developers’ favorite database because, as developers say, “it’s easy to use and simply just works.” The graphic above goes over some of the main product versions MySQL has had over the years. From its first release on May 23rd, 1995 to the latest MySQL 8.0, it has evolved from a pure SQL database into a dual SQL + NoSQL database: NoSQL + SQL = MySQL. Today, the entire MySQL team would like to take this opportunity to thank all of you, our users and customers, for your continued support over the years. MySQL would not be the product it is today without your support. Once again, THANK YOU for your support from all of us here at MySQL. We’re raising our glass of Champagne from home since most of us are still sheltering in place… the cake will be for later, when we no longer live in a virtual world. Here’s to the NEXT 25 years!


MySQL

3 Days 3 Webinars to Learn More about MySQL Document Store

The week of May 4th, 2020, we will have a series of 3 live webinars on the MySQL Document Store topic. With MySQL 8.0, developers can now use the same MySQL database for their NoSQL and SQL applications. To learn more about the MySQL Document Store feature, join us the week of May 4th for a series of 3 webinars, 1 webinar each day for 3 days, at 9AM Pacific Time. Our first webinar will be more business focused and will cover the business benefits of using the same database for both SQL and NoSQL. The second webinar will be more technical and will give you a nice technical introduction to the MySQL Document Store as well as step-by-step instructions on how to use it. Our third webinar will also be for a technical audience, with deeper insight on how to leverage the dual SQL and NoSQL features of MySQL. You will discover how to turn unstructured JSON data temporarily into structured tables for use with SQL (a minimal sketch of this dual approach follows at the end of this post).
1. Tuesday, May 05, 2020 - Benefits of using the same MySQL database for both SQL and NoSQL
This webinar will cover the many business benefits of using the same database for SQL and NoSQL. We will talk about real customer use cases where MySQL is used for both SQL and NoSQL in order to consolidate data management. Register for this web presentation »
2. Wednesday, May 06, 2020 - A Step by Step Introduction to the MySQL Document Store
The MySQL Document Store is a NoSQL JSON document database that is easy to learn, use, and master. You do not need to normalize data, set up tables, or do much of the traditional work needed to use a relational database. Simply connect to the server and start saving your data. And since your data is schema-less, it is very simple to change your data format on the fly. This webinar will show you, with simple step-by-step instructions, how to create a document collection, add your data, and then manage that data, all without any Structured Query Language, using the MySQL Shell. Register for this web presentation »
3. Thursday, May 07, 2020 - Discover the Power of NoSQL + SQL with MySQL
In addition to MySQL's great performance as a relational database, it can also be used as a NoSQL JSON Document Store. Your data is stored in an easy-to-read JSON format that is easily changeable as your needs change or your application evolves. This is all done without any Structured Query Language, with an easy-to-use API that supports all popular programming languages. Plus, you can easily turn the unstructured JSON data temporarily into structured tables for use with SQL if needed, and also access that JSON data from SQL, providing you with the best of both the NoSQL and SQL worlds. Register for this web presentation »
Additional Resources
Want to learn more about the MySQL Document Store? Download our white paper which covers the Top 10 reasons you should use MySQL Document Store for your NoSQL needs. Can't wait for the live webinars? Check our on-demand webinars on the Document Store:
Oracle MySQL Live Webcast: MySQL, NoSQL, JSON, JS, Python: Document Store
NoSQL+SQL=MySQL
MySQL: NoSQL and SQL Together
Business Benefits of using both SQL and NoSQL with the Same Database
NoSQL + SQL = MySQL
NoSQL Development for MySQL Document Store using Java
NoSQL Development for MySQL Document Store using PHP
NoSQL Development for MySQL Document Store using Node.JS
NoSQL Development for MySQL Document Store using .NET
NoSQL Development for MySQL Document Store using Python
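To make the dual NoSQL + SQL idea from the third webinar a bit more concrete, here is a minimal sketch using MySQL Shell in JavaScript mode. It is only an illustration under stated assumptions, not material from the webinars: the schema name (demo), the collection name (products) and the documents are hypothetical, and the schema is assumed to already exist.

// MySQL Shell, JavaScript mode, connected over the X Protocol (port 33060).
// All names below (schema "demo", collection "products") are hypothetical.
var schema = session.getSchema('demo');            // assumes the schema was created, e.g. with session.createSchema('demo')
var products = schema.createCollection('products'); // NoSQL: create a schema-less collection

// NoSQL side: add JSON documents without defining any table structure first
products.add({ name: 'T-shirt', price: 19.90, tags: ['cotton', 'summer'] }).execute();
products.add({ name: 'Mug', price: 7.50 }).execute();

// NoSQL side: query the documents with the CRUD API
products.find('price < 10').fields('name', 'price').execute();

// SQL side: the collection is stored as a table with a JSON column named "doc",
// so the documents can be temporarily turned into a structured result set
session.sql("SELECT jt.name, jt.price " +
            "FROM demo.products, " +
            "JSON_TABLE(doc, '$' COLUMNS (" +
            "  name VARCHAR(50) PATH '$.name', " +
            "  price DECIMAL(8,2) PATH '$.price')) AS jt " +
            "ORDER BY jt.price").execute();

The same data is visible both as schema-less documents through the CRUD API and as relational rows through JSON_TABLE, which is exactly the NoSQL + SQL = MySQL combination the webinars discuss.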


Announcements

Where's the MySQL Team from March 2020 to May 2020

As a follow-up to our regular show announcements, we would like to inform you about the places & shows where you can find the MySQL Community team or MySQL experts during the March to May 2020 timeframe. Please find the details, with a list of MySQL talks & booth details (if available at this time), below:
March 2020:
Southern California Linux Expo (ScaLE), Pasadena, US, March 5-8, 2020
MySQL talk: "MySQL New Features for Developers" by David Stokes, the MySQL Community Manager. Timing: Friday, March 6, 2020 - 15:15 to 16:00. Room: 101. Booth: please find us in the expo area at the MySQL booth.
FOSSASIA, Singapore, March 19-22, 2020
The MySQL team is a sponsor of this show again this year; however, due to the current situation in APAC, we had to cancel our physical attendance. Even without our speakers there will be a MySQL track running, since multiple MySQL talks were accepted. We look forward to meeting you at FOSSASIA next year!
LOADays, Antwerp, Belgium, March 28-29, 2020
Talk (still TBC): "Don't spend time implementing complicated MySQL HA architecture use the right tools!" by Frederic Descamps, the MySQL Community Manager. No booths at LOADays at all.
April 2020:
Midwest PHP, Minneapolis, MN, US, April 2-4, 2020
Talk: "MySQL 8.0 Features for Developers" by David Stokes, the MySQL Community Manager. Timing: Friday, April 3, 2020 - 13:00 to 13:40. Room: check the organizer's website. Booth: please find the MySQL booth in the expo area.
OpenSource101, Austin, TX, April 14, 2020
Talk: "MySQL 8.0 New Features" given by David Stokes, the MySQL Community Manager. Timing: Tuesday, April 14, 2020 - 16:30 to 17:15. Room: check the organizer's website. Booth: you can find the MySQL booth in the expo area.
LinuxFest Northwest, Bellingham, US, April 24-26, 2020
Talks: "New MySQL 8.0 Features", "MySQL Document Store or MySQL without the SQL -- Oh My!" and "EZ MySQL Replication", all given by David Stokes, the MySQL Community Manager. Timing & Room: check the organizer's website. Booth: please find the MySQL booth in the expo area.
Great Indian Developer Summit (GIDS), Bangalore, India, April 20-24, 2020
MySQL is again part of GIDS, as a part of the Oracle booth in the expo area. Please come and talk to our experts there.
May 2020:
PyTexas, Austin, TX, May 16-17, 2020
This year MySQL is a Community sponsor with no booth. We have submitted several proposals for this show; the schedule will be announced during March 2020. Please watch the organizer's website.
PHPTek20, Atlanta, US, May 18-21, 2020
Please find MySQL at the MySQL booth in the expo area.
PerconaLive, Austin, TX, US, May 18-20, 2020
As usual, you can find MySQL at PerconaLive. Since not all proposals have been validated yet, you will find more MySQL talks in the final schedule. The talks accepted so far are as follows. Half-day tutorial: "MySQL InnoDB Cluster & MySQL Replicaset in a nutshell" by Frederic Descamps, the MySQL Community Manager, and Kenny Gryp, the MySQL Principal Product Manager. Keynote: "State of the Dolphin" by Frederic Descamps, the MySQL Community Manager. Session: "MySQL Shell - the best MySQL DBA tool?" by Frederic Descamps, the MySQL Community Manager. Session: "MySQL 8.0 Performance: Scalability & Benchmarks" by Dimitri Kravtchuk, the Oracle Performance Architect. Session: "NDB Cluster 101" by Bernd Ocklin, the Senior Director, MySQL Cluster Engineering. Booth: you can find the MySQL team at the MySQL booth in the expo area.
Texas Linux Fest, Austin, TX, US, May 1-2, 2020
As per tradition, MySQL is part of this Texas Linux show. Multiple talks were submitted and we are waiting for an evaluation. Please watch the organizer's website for further updates. Booth: you can find MySQL at the MySQL booth in the expo area.
DOAG - Databanken, Dusseldorf, Germany, May 25-26, 2020
Talk: "MySQL Best Practices - Tuning fur Dummies" by Carsten Thalheimer, the MySQL Master Principal Sales Consultant. Booth: you can find us at the MySQL booth in the expo area. Please watch the organizer's website for more talks which have not yet been validated.
OpenSource Day (2019 edition), Warsaw, Poland, May 19, 2020 - website to be updated
This year again, the MySQL & Linux teams are part of this show. You can find us at the shared booth in the expo area, as well as find the MySQL & Linux talk in the schedule.
OSC Nagoya, Japan, May 16, 2020
As per tradition, MySQL is part of this Open Source Conference in Japan, this time in Nagoya. Please watch the organizer's website for the 45-minute MySQL talk, which is still being finalized. Booth: you can find us at the MySQL booth in the expo area.


NEW PRODUCT RELEASE: MySQL 8.0.19

New Year 2020 and a new product release: the MySQL Development team has just released MySQL 8.0.19. In addition to bug fixes, a few new features made it into the release. The new generally available version can be downloaded from dev.mysql.com or from the MySQL Yum, APT, or SUSE repositories, and the source code is available on GitHub. You can find the full list of changes and bug fixes in the 8.0.19 Release Notes. Here are the highlights. Enjoy!
Among what's new: the new InnoDB ReplicaSet, and major improvements in Router, SQL, the Optimizer, Information Schema, Security, InnoDB, Replication, X Protocol, the MTR testsuite, and more. Read the feature details: https://mysqlserverteam.com/the-mysql-8-0-19-maintenance-release-is-generally-available/
Other great MySQL 8.0.19 technical blog references:
The all new MySQL InnoDB ReplicaSet
MySQL Shell 8.0.19 – What’s New?
MySQL Shell AdminAPI – What’s new in 8.0.19?
MySQL Connector/NET 8.0.19
MySQL Workbench 8.0.19
MySQL Connector/C++ 8.0.19
MySQL Shell 8.0.19 for MySQL Server 8.0 and 5.7
MySQL 8.0.19 Replication Enhancements
MySQL Connector/Node.js 8.0.19
MySQL Connector/J 8.0.19
MySQL Connector/Python 8.0.19
MySQL Connector/ODBC 8.0.19
Announcing MySQL Server 8.0.19, 5.7.29 and 5.6.47
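Since InnoDB ReplicaSet is the headline feature of 8.0.19, here is a minimal sketch of how such a ReplicaSet can be created with the MySQL Shell AdminAPI in JavaScript mode. The host names, the admin account and the ReplicaSet name are hypothetical placeholders, and each instance is assumed to have been prepared beforehand; see the blog posts linked above for the full procedure.

// MySQL Shell 8.0.19+, JavaScript mode. Host names, ports, the "admin" account
// and the ReplicaSet name are hypothetical placeholders for this sketch.
// Each instance is assumed to have been prepared with dba.configureReplicaSetInstance().

// Connect to the instance that will act as the primary
shell.connect('admin@primary-host:3306');

// Create the ReplicaSet (MySQL asynchronous replication managed by the AdminAPI)
var rs = dba.createReplicaSet('myReplicaSet');

// Add a secondary; 'clone' provisions it with a full copy of the primary's data
rs.addInstance('admin@replica-host:3306', { recoveryMethod: 'clone' });

// Check the topology and the replication status of each member
rs.status();

From there, MySQL Router can be bootstrapped against the ReplicaSet so applications transparently follow the primary, as described in the release blog posts linked above.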


Announcements

MySQL Wins “DBMS of the Year” Award from DB-Engines

What a great way to start the New Year! I woke up this morning to find in the news that MySQL won the DBMS of the Year award for 2019. “MySQL is the database management system that gained more popularity in our DB-Engines Ranking within the last year than any of the other 350 monitored systems. We thus declare MySQL as the DBMS of the Year 2019”, says DB-Engines. Getting the highest popularity gain among over 350 monitored systems is really impressive, but this could only happen because of you, the users of MySQL. Thank you all for your ongoing support and your feedback that helps us continue to make MySQL always better. As for the method used for determining the winner every year, DB-Engines explained it this way: “For determining the DBMS of the year, we subtracted the popularity scores of January 2019 from the latest scores of January 2020. We use the difference of these numbers, rather than a percentage, because that would favor systems with a tiny popularity at the beginning of the year. The result is a list of DBMSs sorted by how much they managed to increase their popularity in 2019, or in other words, how many additional people started to communicate about it in one of the ways we measure in our methodology, for example job offers, professional profile entries and citations on the web.” For more information about DB-Engines’ DBMS of the Year award, you can check their post at: https://db-engines.com/en/blog_post/83
Again, a BIG thank you to you all from the entire MySQL team! And now is a good time to check out why #MySQL8isGreat! Here are the MySQL 8.0 Top 10 Features:
1. Hash Joins
2. EXPLAIN Analyze
3. Window Functions
4. CTEs
5. InnoDB Cluster
6. InnoDB Clone
7. Shell
8. Roles
9. Data Dictionary
10. Document Store
Learn More about MySQL 8.0 at: https://www.slideshare.net/lefred.descamps/benefits-to-upgrade-to-mysql-80
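To give a flavor of a few of the SQL-level features in that list (window functions, CTEs and EXPLAIN ANALYZE), here is a small, self-contained sketch. The payments table and its data are purely hypothetical and only serve to illustrate the syntax.

-- Hypothetical table and data, used only to illustrate the features above
CREATE TABLE payments (
  id     INT PRIMARY KEY,
  region VARCHAR(20),
  amount DECIMAL(10,2)
);
INSERT INTO payments VALUES
  (1, 'EU', 100.00), (2, 'EU', 250.00),
  (3, 'US',  80.00), (4, 'US', 120.00);

-- CTE + window function: highest payment per region
WITH ranked AS (
  SELECT region, amount,
         RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
  FROM payments
)
SELECT region, amount FROM ranked WHERE rnk = 1;

-- EXPLAIN ANALYZE: executes the query and reports actual cost and timing per plan step
EXPLAIN ANALYZE
SELECT region, SUM(amount) FROM payments GROUP BY region;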
