Operating System
Windows 7 64 / Windows 8 64 / Windows 10 64 / Windows 11
Filename
mysql-workbench-community-8.0.29-winx64.msi
Sometimes the latest version of the software can cause issues when installed on older devices or devices running an older version of the operating system.
Software makers usually fix these issues, but it can take them some time. In the meantime, you can download and install an older version of MySQL Workbench 8.0.29.
For those interested in downloading the most recent release of MySQL Workbench or reading our review, simply click here.
All old versions distributed on our website are completely virus-free and available for download at no cost.
We would love to hear from you
If you have any questions or ideas that you want to share with us - head over to our Contact page and let us know. We value your feedback!
What's new in this version:
Audit Log Notes:
- The new audit_log_disable system variable permits disabling audit logging for all connecting and connected sessions. See Disabling Audit Logging.
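As an illustration, the new variable can be toggled with a SET GLOBAL statement. This is a minimal sketch assuming the MySQL Enterprise Audit plugin (audit_log) is installed and the account has the required privileges:
    SET GLOBAL audit_log_disable = true;   -- suspend audit logging for all sessions
    SET GLOBAL audit_log_disable = false;  -- resume audit logging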
C API Notes:
- Building C API client programs on Windows using Clang as the compiler returned various compiler warnings, including an unused variable warning.
Character Set Support:
- Incompatible Change: CONVERT(string USING charset) did not compute the correct maximum length for its return value, which should be the same as that calculated for CAST(string AS charset). This meant that some conversions of strings from BINARY to nonbinary character sets which should have been rejected as invalid returned values instead.
- Prior to upgrading, applications that may rely on the previous CONVERT() behavior should be checked and updated as necessary. In particular, for indexes on generated columns using CONVERT() with invalid values, you should correct such values, drop the index, then re-create it before upgrading to this release. In some cases, it may be simpler to rebuild the table using ALTER TABLE table FORCE, rather than dropping and re-creating the index. See SQL Changes, for more information.
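The following sketch shows the recommended pre-upgrade steps for a hypothetical table t1 with an index on a generated column that uses CONVERT(); the table, column, and index names are illustrative only:
    -- Hypothetical schema: t1(bin_col VARBINARY(64),
    --                         gen_col VARCHAR(64) AS (CONVERT(bin_col USING utf8mb4)) STORED,
    --                         KEY idx_gen (gen_col))
    ALTER TABLE t1 DROP INDEX idx_gen;          -- drop the index on the generated column
    -- correct any invalid values stored in bin_col here
    ALTER TABLE t1 ADD INDEX idx_gen (gen_col); -- re-create the index
    -- Alternatively, rebuild the whole table in one step:
    ALTER TABLE t1 FORCE;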
- The output from EXPLAIN FORMAT=TREE hex-encoded ranges for multi-valued indexes, even when the data type was not a binary one. In addition, ranges using string types were also printed with hex-encoded values, when they had a binary collation. The latter issue also affected regular indexes, but was more visible with multi-valued indexes, since these always use utf8mb4_0900_bin collation. Now, hex-encoding is used only for string types having a binary character set. Strings with non-binary character sets are now printed in EXPLAIN FORMAT=TREE output as plain text, with escaping for any special characters.
- For some functions, the resolved character set was not always the same as the character set of the first argument.
Compilation Notes:
- InnoDB: Compilation issues associated with the following MSVC++ level 4 compiler warnings were addressed: C4201, C4701, C4702, C4703, C4706. The compiler warnings, which were previously disabled, are now enabled.
- InnoDB: MSVC++ level 4 compiler warnings were enabled.
- InnoDB: An access violation occurred when building a debug version of MySQL using Visual Studio 2019 version 16.10 or version 16.11. The violation was due to an STL iterator bug.
- Binary packages that include curl rather than linking to the system curl library have been upgraded to use curl 7.80.0.
Data Type Notes:
- This release fixes the following two issues relating to date and time values:
- Inserting a CHAR value such as '12:00:00' into a DATE, DATETIME, or TIMESTAMP column raised the wrong error. In the case of a DATE column, this error was similar to Data truncation: Incorrect date value: '2012-00-00' for column 'd' at row 1. This occurred for both the binary and text protocols.
- Inserting a value with an offset into a DATE or TIME column using the binary protocol gave a wrong result. For example, when the connection time zone was set to GMT-5, inserting '2021-10-10 00:00:00.123+01:00' into a TIME column yielded '18:00:00'; that is, the value was converted to the connection time zone (this should be done only with respect to DATETIME columns).
- We fix these by recognizing and adjusting for time zone offsets whenever a TIMESTAMP value with a time zone offset is inserted into a TIME or DATE column.
- References: See also: 44.
- A fix in MySQL 8.0.27 for a previous issue changed the resolved type of a boolean expression from signed INT to unsigned BIGINT in order to simplify type handling, but what appeared then as a harmless metadata change turned out to cause problems for some of the MySQL Connectors.
- We now revert the metadata change and provide a different fix for the original problem, by adjusting the max_length for the negation of an integer to at least two characters.
- References: This issue is a regression of: 10.
- Sorts of some column types, including JSON and TEXT, sometimes exhausted the sort buffer if its size was not at least 15 times that of the largest row in the sort. Now the sort buffer need be only 15 times as large as the largest sort key.
- References: This issue is a regression of:
Deprecation and Removal Notes:
- The shortcuts ASCII for CHARACTER SET latin1 and UNICODE for CHARACTER SET ucs2 are now deprecated, and you should expect their removal in a future version of MySQL. Using either of these now raises a warning; use CHARACTER SET instead.
The character sets listed here, along with all of their collations, are now deprecated, and subject to removal in a subsequent release of MySQL:
- ucs2
- macroman and macce
- dec
- hp8
- The use of any of these character sets or their collations in SQL statements or elsewhere in the MySQL server now produces a deprecation warning.
- You should use utf8mb4 instead of any of the character sets just listed. See also The ucs2 Character Set (UCS-2 Unicode Encoding), West European Character Sets, and Central European Character Sets.
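As a hedged example of moving away from the deprecated shortcuts and character sets (table and column names here are hypothetical):
    -- Spell out the character set instead of using the deprecated ASCII/UNICODE shortcuts:
    CREATE TABLE t_legacy (c VARCHAR(10) CHARACTER SET latin1);
    -- Migrate an existing table away from a deprecated character set such as ucs2:
    ALTER TABLE t_old CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;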
Event Scheduler Notes:
- Previously, the server automatically restarted the Event Scheduler as needed when the super_read_only system variable was disabled. The same convenience is now applied independently when the read_only system variable is disabled. Prior to this update, disabling read_only could also disable super_read_only if needed, but because the code paths were separate, the Event Scheduler was not restarted.
Full-Text Search Notes:
- Because, as implemented, MATCH() acts not as a function of its arguments but as a function of the row ID of the current row in the underlying scan of the base table, a query using a rollup column as the argument to this function tended to perform poorly and with unreliable results. For this reason, the use of a rollup column with MATCH() is no longer permitted whenever all of the following conditions are true:
- MATCH() appears in the SELECT list, GROUP BY clause, HAVING clause, or ORDER BY clause.
- The query uses GROUP BY ... WITH ROLLUP.
- A grouping column is used as the argument to MATCH().
- Any such queries are now rejected with ER_FULLTEXT_WITH_ROLLUP. (The use of MATCH() with a rollup column in the WHERE clause is not affected by this change, and is still permitted.)
- For more information, see Full-Text Search Functions.
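The restriction can be illustrated with a hypothetical table carrying a full-text index (names are illustrative; the exact error text may vary):
    CREATE TABLE articles (id INT PRIMARY KEY, body TEXT, FULLTEXT KEY ft_body (body));
    -- Rejected with ER_FULLTEXT_WITH_ROLLUP: MATCH() on a grouping column in the SELECT list
    SELECT body, MATCH(body) AGAINST ('mysql') FROM articles GROUP BY body WITH ROLLUP;
    -- Still permitted: MATCH() on the rollup column in the WHERE clause
    SELECT body, COUNT(*) FROM articles
      WHERE MATCH(body) AGAINST ('mysql') GROUP BY body WITH ROLLUP;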
SQL Function and Operator Notes:
- Important Change: When using prepared statements, the DATE_ADD() and DATE_SUB() functions returned DATETIME values, even when calculations involved combinations of YEAR, MONTH, or DAY parts only (that is, no time parts).
- Prior to the implementation of single preparation of prepared statements in MySQL 8.0.22, TIME values were returned in such cases; before that change, the values used as arguments and their types were used to determine the result type of certain temporal functions, such as DATE_ADD(), DATE_SUB(), and ADDTIME(), at resolution time. Afterwards, user variable references and dynamic parameters are considered constant for the lifetime of one execution only, requiring that the return type be determined in another fashion, in this case from the function type. For example, the default resolved type for DATE_ADD() was deemed to be DATETIME if the first argument was a dynamic parameter, since DATETIME accommodates all temporal values and thus an implicit reprepare can be avoided.
- The change just described represents a regression; the problem is better solved by deriving a more precise resolved data type, and performing a reprepare only if that does not match the actual value of the parameter. (Such functionality was already in use in the MySQL server for numeric parameters.) This solution is implemented by this fix.
- We now parse string and numeric values when temporal values are expected. When a valid temporal value is found, the value is converted. This fix also improves determination of resolved data types for temporal functions.
- With this fix, the DATE_ADD() and DATE_SUB() functions (and their synonyms ADDDATE() and SUBDATE()) resolve types as follows:
- If the first argument is a dynamic parameter, its resolved type is DATE if the second argument is an interval that contains some combination of YEAR, MONTH, or DAY values only; otherwise, its type is DATETIME.
- If the first argument is resolved as DATETIME, the resolved type of the function is also DATETIME.
- If the first argument is a DATE, the resolved type of the function is also DATE, unless the interval argument uses HOUR, MINUTE, or SECOND, in which case it is DATETIME.
- If the first argument is a TIME value, the resolved type is also TIME, unless the interval argument uses any of YEAR, MONTH, or DAY, in which case the resolved type of the function is DATETIME.
- If none of the preceding conditions are met, the function is resolved as VARCHAR (as in MySQL 8.0.21 and earlier).
- The ADDTIME() and SUBTIME() functions now resolve types as follows:
- If the first argument is a dynamic parameter, the resolved type is TIME, rather than DATETIME.
- Otherwise, the resolved type of the function is derived from the resolved type of the first argument.
- In addition, for DATE_ADD() and DATE_SUB(), if the resolved type of the first argument is DATE, and a DATETIME value is provided, the statement is reprepared so that the function has the resolved type DATETIME. Behavior is unchanged when a TIME value is provided.
- For ADDTIME() and SUBTIME(), there are no forced reprepares.
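A small sketch of how the new resolution rules play out with dynamic parameters (the literal date is arbitrary; results follow the rules stated above):
    PREPARE s1 FROM 'SELECT DATE_ADD(?, INTERVAL 1 MONTH)';
    SET @d = '2022-01-15';
    EXECUTE s1 USING @d;   -- interval uses only YEAR/MONTH/DAY parts, so the result resolves as DATE
    PREPARE s2 FROM 'SELECT DATE_ADD(?, INTERVAL 1 HOUR)';
    EXECUTE s2 USING @d;   -- interval includes a time part, so the result resolves as DATETIME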
- Previously, loadable functions and stored functions shared the same namespace and could not have the same name. A subsequent implementation change eliminated the reason to share the same namespace and permitted a stored function to be created with the same name as an existing loadable function. To invoke the stored function, it is necessary to qualify it with a schema name. The server now generates a warning if the stored function name collides with an existing loadable function name.
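For example, assuming a loadable function named my_func is already installed, a stored function with the same name can now be created (a warning is generated) and must be invoked with its schema name; all names here are hypothetical:
    CREATE FUNCTION test.my_func(x INT) RETURNS INT DETERMINISTIC RETURN x + 1;
    SELECT test.my_func(41);   -- schema qualification invokes the stored function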
- Queries making use of the MBRContains() function did not employ all available spatial indexes.
- References: This issue is a regression of: 05.
- The FORMAT() function returned a formatted number without showing the thousands separator and grouping between separators when either the es_ES or es_MX locale was specified.
- The result length of the GROUP_CONCAT() function was wrong when the value of group_concat_max_len was increased. With a small group_concat_max_len value, the result was correct. This issue was caused by arithmetic overflow.
- Our thanks to Hope Lee for the contribution.
Optimizer Notes:
- For a query, a range scan may be chosen first, and the optimizer may then decide that the same range scan can be used to skip ordering. In some cases, when the requested order is not the same as the index order, a reverse index scan is required. If the requested ordering is on a key part that is not already used by the range scan, the number of used key parts for the range scan is updated to reflect the change. The issue occurred because the key part information was not updated as well, which caused problems when key part information had to be accessed based on the number of key parts used.
- We also now note when a reverse scan uses extended key parts, and set the correct flags for evaluation accordingly.
- References: This issue is a regression of:
- In both optimizer trace and EXPLAIN FORMAT=TREE output, date ranges were printed as binary. Now in such cases we print temporal values as quoted strings.
- When a condition was pushed down, the result of evaluating assignments to user variables in the SELECT list of the subquery was sometimes affected. For this reason, we now prevent condition pushdown for statements with assignments to user variables.
- Our thanks to Casa Zhang and the Tencent team for the contribution.
- When the join buffer was set to certain arbitrary sizes, the number of chunk files created for hash joins was too small. This meant that each file contained more rows than could fit in the join buffer, so that the probe chunks needed to be read multiple times. This was caused by using integer division when computing how many chunk files are needed; we fix this by using floating-point division instead.
- Our thanks to Øystein Grøvlen for the contribution.
- A query using aggregation on a BIT type could return different results depending on the indexes or join type employed. This was due to the fact that a DML statement using such an aggregation caches the BIT values using an integer type, and later looks up and converts to a string format for output. The current issue arose because this lookup treated the BIT value as an integer, resulting in an incorrect string value.
- This is fixed by adding a new internal class for cached BIT values which can convert bit values to string formats correctly.
- Our thanks to Hope Lee for the contribution.
- When a DML statement contained an outer DISTINCT query with a subquery inside a HAVING clause, the inner subquery could attempt to use a column reference to a column from the outer DISTINCT query; this should be allowed only if the subquery is used somewhere outside of the HAVING clause, or if the outer SELECT does not use grouping. The issue came about because such a query was allowed to run even though neither of these conditions was met.
- To fix this, the column reference check is expanded to detect an invalid column reference of this sort, and to return an error if it does.
- Our thanks to Song Zhibai for the contribution.
Packaging Notes:
- The GnuPG build key used to sign MySQL downloadable packages has been updated. The previous GnuPG build key is set to expire on 2022-02-16. For information about verifying the integrity and authenticity of MySQL downloadable packages using GnuPG signature checking, or to obtain a copy of our public GnuPG build key, see Signature Checking Using GnuPG.
- Due to the GnuPG key update, systems configured to use repo.mysql.com may report a signature verification error when upgrading to MySQL 5.7.37 and higher or to MySQL 8.0.28 and higher using apt or yum. Use one of the following methods to resolve this issue:
- Manually reinstall the MySQL APT or YUM repository setup package from https://dev.mysql.com/downloads/.
- Download the MySQL GnuPG public key and add it to your system GPG keyring.
- For MySQL APT repository instructions, see Appendix A: Adding and Configuring the MySQL APT Repository Manually.
- For MySQL YUM repository instructions, see Upgrading MySQL with the MySQL Yum Repository.
Performance Schema Notes:
- A new statement metric, CPU_TIME, is now available, enabling you to measure the CPU time spent on a query.
- The following changes were made to support this:
- A timer, THREAD_CPU, was added to the Performance Schema PERFORMANCE_TIMERS table.
- A consumer, events_statements_cpu was added to the Performance Schema setup_consumers table.
- events_statements_cpu is disabled by default.
- A Performance Schema command option, performance-schema-consumer-events-statements-cpu, was added to enable or disable the events_statements_cpu consumer.
- See Performance Schema Command Options for more information.
The following columns were added:
- CPU_TIME column was added to the following Performance Schema tables:
- events_statements_current
- events_statements_history
- events_statements_history_long
- CPU_TIME is the time spent on CPU for the current thread, expressed in picoseconds.
- SUM_CPU_TIME column was added to the following Performance Schema tables:
- events_statements_summary_by_thread_by_event_name
- events_statements_summary_by_account_by_event_name
- events_statements_summary_by_user_by_event_name
- events_statements_summary_by_host_by_event_name
- events_statements_summary_global_by_event_name
- events_statements_summary_by_digest
- events_statements_summary_by_program
- prepared_statements_instances
- SUM_CPU_TIME is the CPU time spent on the current thread, expressed in picoseconds, for the corresponding aggregations.
- CPU_LATENCY column was added to the following sys schema tables:
- statement_analysis
- user_summary_by_statement_type
- user_summary_by_statement_latency
- host_summary_by_statement_type
- host_summary_by_statement_latency
- processlist
- session
- CPU latency is the CPU time spent on the current thread, expressed in human-readable form.
CPU_LATENCY column was added to the following sys schema x$ tables:
- x$statement_analysis
- x$user_summary_by_statement_type
- x$host_summary_by_statement_type
- x$processlist
- x$session
- x$user_summary_by_statement_latency
- x$host_summary_by_statement_latency
- CPU latency is the CPU time spent on the current thread, expressed in picoseconds.
- Our thanks to Facebook for their contribution to this fix.
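A minimal sketch of using the new instrumentation, assuming sufficient privileges on the performance_schema and sys schemas:
    -- Enable the new consumer (disabled by default)
    UPDATE performance_schema.setup_consumers
       SET ENABLED = 'YES'
     WHERE NAME = 'events_statements_cpu';
    -- Inspect per-statement CPU time (picoseconds)
    SELECT EVENT_ID, SQL_TEXT, CPU_TIME
      FROM performance_schema.events_statements_current;
    -- Or use the human-readable sys schema view
    SELECT query, cpu_latency FROM sys.statement_analysis LIMIT 5;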
Functionality Added or Changed:
- Important Change: The number of distinct windows which can appear in a given SELECT is now limited to 127. The number of distinct windows is the sum of the named windows and the implicit windows specified as part of any window function's OVER clause.
- In order to use a large number of windows, it may be necessary to increase the value of the thread_stack server system variable.
- InnoDB: InnoDB now supports ALTER TABLE ... RENAME COLUMN operations using ALGORITHM=INSTANT.
- Operations that support ALGORITHM=INSTANT only modify metadata in the data dictionary. No exclusive metadata locks are taken on the table during preparation and execution phases of the operation, and table data is unaffected, making the operations instantaneous. If not specified explicitly, ALGORITHM=INSTANT is used by default by DDL operations that support it.
- For more information about this and other DDL operations that support ALGORITHM=INSTANT, see Online DDL Operations.
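For example (hypothetical table and column names):
    ALTER TABLE t1 RENAME COLUMN old_name TO new_name, ALGORITHM=INSTANT;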
- It is theoretically possible for a user with sufficient permissions using MySQL Enterprise Audit to mistakenly create an “abort” item in the audit log filter that prevents them and other administrators from accessing the system. From MySQL 8.0.28, the AUDIT_ABORT_EXEMPT privilege is available to permit a user account’s queries to always be executed even if an “abort” item would block them. Accounts with this privilege can therefore be used to regain access to a system following an audit misconfiguration. The query is still logged in the audit log, but instead of being rejected, it is permitted due to the privilege.
- Accounts created in MySQL 8.0.28 or later with the SYSTEM_USER privilege have the AUDIT_ABORT_EXEMPT privilege assigned automatically when they are created. The AUDIT_ABORT_EXEMPT privilege is also assigned to existing accounts with the SYSTEM_USER privilege when you carry out an upgrade procedure with MySQL 8.0.28 or later, if no existing accounts have that privilege assigned.
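The privilege can also be granted explicitly to a designated recovery account; the account name below is hypothetical:
    GRANT AUDIT_ABORT_EXEMPT ON *.* TO 'audit_recovery'@'localhost';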
- Support for the TLSv1 and TLSv1.1 connection protocols is removed as of MySQL 8.0.28. The protocols were deprecated from MySQL 8.0.26. For background, refer to the IETF memo Deprecating TLSv1.0 and TLSv1.1. Make connections using the more-secure TLSv1.2 and TLSv1.3 protocols. TLSv1.3 requires that both the MySQL Server software and the client application be compiled with OpenSSL 1.1.1 or higher.
- From MySQL 8.0.28, client programs, including MySQL Shell, that support a --tls-version option for specifying TLS protocols for connections to the MySQL server cannot make a TLS/SSL connection with the protocol set to TLSv1 or TLSv1.1. If a client attempts to connect using these protocols, for TCP connections, the connection fails, and an error is returned to the client. For socket connections, if --ssl-mode is set to REQUIRED, the connection fails; otherwise the connection is made but with TLS/SSL disabled.
- On the server side, the following settings are changed from MySQL 8.0.28:
- The default values of the server’s tls_version and admin_tls_version system variables no longer include TLSv1 and TLSv1.1.
- The default value of the Group Replication system variable group_replication_recovery_tls_version no longer includes TLSv1 and TLSv1.1.
- For asynchronous replication, replicas cannot set the protocol for connections to the source server to TLSv1 or TLSv1.1 (the SOURCE_TLS_VERSION option of the CHANGE REPLICATION SOURCE TO statement).
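To confirm the protocols in effect on an upgraded server, the current defaults can be inspected as follows (a read-only check; changing tls_version itself is done through the usual configuration mechanisms):
    SELECT @@global.tls_version, @@global.admin_tls_version;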
- The tmp_table_size variable now defines the maximum size of individual in-memory internal temporary tables created by the TempTable storage engine. An appropriate size limit prevents individual queries from consuming an inordinate amount of global TempTable resources. See Internal Temporary Table Storage Engine.
- The innodb_open_files variable, which defines the number of files InnoDB can have open at one time, can now be set at runtime using a SELECT innodb_set_open_files_limit(N) statement. The statement executes a stored procedure that sets the new limit.
- To prevent non-LRU managed files from consuming the entire innodb_open_files limit, non-LRU managed files are now limited to 90 percent of the innodb_open_files limit, which reserves 10 percent of the innodb_open_files limit for LRU managed files.
- The innodb_open_files limit now includes temporary tablespace files, which were not counted toward the limit previously.
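A sketch of setting the limit at runtime with the new statement (the value 8192 is arbitrary):
    SELECT innodb_set_open_files_limit(8192);
    SELECT @@innodb_open_files;   -- confirm the new limit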
- The functions FROM_UNIXTIME(), UNIX_TIMESTAMP(), and CONVERT_TZ() now handle 64-bit values on platforms that support them, including 64-bit versions of Linux, macOS, and Windows.
- On compatible platforms, FROM_UNIXTIME() now accepts a maximum argument of 32536771199.999999 seconds, corresponding to '3001-01-18 23:59:59.999999' UTC (including the optional fraction of up to 6 digits). If the argument is larger than this, the function returns NULL.
- On compatible platforms, UNIX_TIMESTAMP() now accepts a maximum value of '3001-01-18 23:59:59.999999' UTC, corresponding to 32536771199.999999 seconds since the Unix Epoch. If the argument is larger than this, the function returns 0.
- In addition, on compatible platforms, CONVERT_TZ() now performs time zone conversion beyond 2038, up to '3001-01-18 23:59:59.999999' UTC. If the datetime argument exceeds this value, the argument is returned unchanged. This “no-op” behavior is the same as previously with values beyond '2038-01-19 03:14:07.999999' UTC.
- The behavior of these 3 functions on 32-bit platforms is unchanged.
- The behavior of the TIMESTAMP type is also unaffected by this change; its maximum allowed value remains '2038-01-19 03:14:07.999999' UTC, regardless of platform. For dates beyond this, use the MySQL DATETIME type instead.
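On a 64-bit platform, the extended range can be exercised as follows (the session time zone is fixed to UTC so the boundary values line up with the documented maximums):
    SET time_zone = '+00:00';
    SELECT FROM_UNIXTIME(32536771199.999999);             -- '3001-01-18 23:59:59.999999'
    SELECT UNIX_TIMESTAMP('3001-01-18 23:59:59.999999');  -- 32536771199.999999
    SELECT CONVERT_TZ('2040-06-01 12:00:00', '+00:00', '+02:00');  -- conversion beyond 2038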
- This release introduces monitoring and limiting of memory allocation on a global and per-user basis. You can now observe the total memory consumed by all user connections by checking the value of the Global_connection_memory status variable, which must be enabled by setting global_connection_memory_tracking = 1.
- The total includes memory used by system processes, or by the MySQL root user, although these users are not subject to disconnection due to memory usage.
- Memory used by the InnoDB buffer pool is not included in the total.
- You can control indirectly how often the status variable is updated by adjusting connection_memory_chunk_size; Global_connection_memory is updated only when the total memory usage varies by more than this amount.
- You can specify limits on resource consumption per user connection by setting connection_memory_limit; any user whose memory usage exceeds this amount cannot issue additional queries. You can also impose a global memory limit by setting global_connection_memory_limit. Whenever Global_connection_memory exceeds the global limit, no regular users can issue new queries requiring memory usage. System users such as MySQL root are not bound by these limits.
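A hedged configuration sketch using the variables named above (the limits shown are arbitrary examples, not recommendations):
    SET GLOBAL global_connection_memory_tracking = ON;
    SET GLOBAL connection_memory_limit = 2147483648;          -- ~2 GiB per user connection
    SET GLOBAL global_connection_memory_limit = 34359738368;  -- ~32 GiB across all connections
    SHOW GLOBAL STATUS LIKE 'Global_connection_memory';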
Fixed:
- InnoDB: The minimum I/O buffer size calculated during an index creation operation did not align with the I/O block size, permitting a record to exceed the buffer boundary.
- InnoDB: The sync_array_detect_deadlock algorithm used by the semaphore deadlock checker in debug builds was simplified in terms of code and time complexity, and an implementation of the algorithm was introduced for use in release builds.
- InnoDB: The ut::make_unique library function in the InnoDB sources now permits specifying the type of field allocated.
- InnoDB: A Performance Schema instrumentation was added for tracking redo log buffer memory allocations.
- InnoDB: Warnings printed to the error log for long semaphore waits did not provide information about the latch owner.
- InnoDB: A latch release and reacquisition mechanism was introduced to reduce the amount of time that threads spend in critical sections protected by a global lock system latch.
- InnoDB: A hole punch operation on Windows caused a failure. The operation was performed as an overlapped (asynchronous) operation, which requires a OVERLAPPED structure containing a handle to an event object. The OVERLAPPED structure was not provided.
- InnoDB: The ut_time() infrastructure in the InnoDB sources was replaced with a type-checked standard library implementation.
- InnoDB: Numerous Trying to access missing tablespace errors were printed to the error log following a restore operation.
- InnoDB: Performance Schema aware ut::make_unique and ut::make_shared memory management library functions were added to the InnoDB sources. Similar functions (ut::make_unique_aligned and ut::make_shared_aligned) were added for types with extended alignment.
- InnoDB: The buf_validate() function in the InnoDB sources was optimized, improving performance on debug builds.
- Thanks to Hobert Lu for the contribution.
- InnoDB: On a NUMA-enabled system, the page size of memory chunks allocated to the buffer pool did not align with the system page size in certain scenarios, causing the following error: Failed to set NUMA memory policy of buffer pool page frames to MPOL_INTERLEAVE.
- References: This issue is a regression of: 44.
- InnoDB: Two instances of std::unique_ptr with mem_heap in the InnoDB sources now use the Scoped_heap() wrapper, which uses a stateless function object instead of a pointer to a function.
- InnoDB: The m_end_range flag in the prebuilt struct, which is set to true when the end of the range is exceeded while populating the prefetch cache, was not set to false when the prefetch cache was reset (initialized). As a result, in cases where the end of the range is not exceeded and the handler is reused, the m_end_range flag could be set incorrectly the next time the prefetch cache is used.
- InnoDB: Column metadata in the data dictionary was not updated when a new table ID was assigned to a table after discarding the table's tablespace during a table import operation.
- InnoDB: Setting the innodb_interpreter debug-only system variable to NULL caused a failure.
- InnoDB: Full-text index creation file management was improved.
- InnoDB: An update operation that inserted a new row into a temporary table used for aggregation caused the temporary table to be moved to disk and the update operation to be retried on the new on-disk temporary table. A BLOB pointer in the record data prepared before the temporary table was moved to disk was rendered stale, causing a failure.
- InnoDB: Memory allocation is now performed by a new standards-compliant custom memory allocator which is compatible with Performance Schema.
- InnoDB: A race condition between threads attempting to deinitialize and initialize statistics for the same table raised an assertion failure.
- InnoDB: An innodb_flush_log_at_trx_commit setting other than 1 or a long running transaction could have resulted in an inconsistent rate of redo log flushing.
- InnoDB: Allocation of large pages is now handled by a library designed to handle this. In cases where the large page allocation mechanism cannot be used, a fallback mechanism allocates regular aligned pages. Fallback can occur when large page address space is exhausted, when large page support is disabled by the underlying hardware or operating system architecture, or when large page support in MySQL is disabled explicitly (--large-pages=0). Occurrences of ut_allocator functions for allocation and deallocation of large pages have been replaced by the new library functions.
- InnoDB: Handling of regular 4K page-aligned allocations is now performed by a self-contained library which is compatible with Performance Schema.
- InnoDB: Functions belonging to a new InnoDB library responsible for dynamic storage management of suitably aligned types has replaced the functions previously used for this purpose.
- InnoDB: Dynamic allocation of suitably aligned types is now handled by a library which is compatible with Performance Schema.
- InnoDB: While a purge thread was freeing LOB pages at end of a purge batch, a required index data structure was freed, causing a failure.
- InnoDB: Inconsistencies in Performance Schema instrumentation logic for dynamic memory management functions (ut_* functions) were addressed.
- InnoDB: InnoDB dynamic allocation routine limitations prevented dynamic allocation of an array of constructible types. The limitations have been addressed, permitting allocation of default constructible types, non-default constructible types, and types that are both default and non-default constructible.
- InnoDB: When using READ COMMITTED or READ UNCOMMITTED, certain queries executed on a table inside of a trigger or function prevented concurrent transactions on the same table. The acquired locks were too restrictive.
- InnoDB: Hole punch functionality was not working as expected for tablespace pages that were encrypted and compressed, for most pages on Windows, and for Windows volumes that do not implement Microsoft volume management functions.
- Partitioning: Creating a table with nondeterministic functions in generated column expressions should not be possible, but this was not enforced in all cases; a series of one or more ALTER TABLE statements could be employed to arrive at a partitioned table with one or more such generated columns. When attempting to execute the CREATE TABLE statement obtained by running SHOW CREATE TABLE against this table, MySQL rejected the statement with a misleading error message referring to the partitioning expression rather than to the problematic column, despite the fact that the partitioning expression itself was legal.
- This was caused by the result of a check for any unsafe expressions defined for a generated column (in the internal variable thd->safe_to_cache_query), which was later checked again without being cleared while parsing the partition expression, leading to an error even when the partition expression did not refer to the problematic generated column expression. Now in such cases, we reset thd->safe_to_cache_query before parsing the partition function.
- The issue of allowing the use of certain nondeterministic functions (AES_ENCRYPT(), AES_DECRYPT(), RANDOM_BYTES()) in generated columns is handled separately.
- References: See also: 20.
- Partitioning: A query using an index other than the primary key of a partitioned table sometimes resulted in excessive CPU load.
- Replication: Group Replication could stop unexpectedly during the auto-rejoin procedure while the Performance Schema replication group member statistics table was being checked. The concurrency of the operations is now handled correctly.
- Replication: Group Replication’s selection process for a group member to be the donor for distributed recovery involves the use of the standard random selector defined for the operating system. If this random device was not available and an exception was thrown, the joining member’s joining process stopped. Group Replication now takes this possibility into account and uses a fallback random selector if the one on the operating system returns an error.
- Replication: A PURGE BINARY LOGS statement could be issued while the instance was locked for backup, which contravened the rules of the backup lock by removing files from the server. The statement now cannot be used while a LOCK INSTANCE FOR BACKUP statement is in effect.
- Replication: The STOP GROUP_REPLICATION statement stops asynchronous replication channels on the group member, but it does not implicitly commit transactions that are in progress on them like STOP REPLICA does. This is because on a Group Replication group member, an additional transaction committed during the shutdown operation would leave the member inconsistent with the group and cause an issue with rejoining. The server is left in a read-only state after the operation is completed. This situation leads to failed commits for transactions that are in progress while stopping Group Replication, so to avoid these, the STOP GROUP_REPLICATION statement now cannot be issued while a GTID is assigned as the value of the gtid_next system variable.
- Replication: An expelled group member that was rejoining a group using Group Replication’s auto-rejoin procedure reported its state as RECOVERING too early on, while it was still collecting information from other group members and before the compatibility checks were complete. The state change is now carried out while the view is being installed, which is the point where the rejoining member is actually accepted as a group member.
- Replication: Replication stopped with an error when reading a table map event if the name of the table or database was over 64 bytes; the limit is 64 characters, so the use of multi-byte characters could cause this situation. The replica now no longer checks the size of the table and database names, and supports the transmission of longer names.
- Replication: A lock conflict could occur if the Performance Schema table replication_applier_status_by_worker was queried while a STOP REPLICA command was in progress. The issue has now been resolved.
- Replication: From MySQL 8.0.26, new versions of the plugins that implement semisynchronous replication are supplied to replace the terms “master” and “slave” with “source” and “replica”. Following this, the UNINSTALL PLUGIN statement incorrectly allowed the old versions of the plugins, rpl_semi_sync_master and rpl_semi_sync_slave, to be uninstalled when replication channels were currently using them. The issue has now been fixed.
- Replication: When the PAD_CHAR_TO_FULL_LENGTH SQL mode was enabled on a replica server, trailing spaces could be added to a replication channel’s name in the replication metadata repository tables, resulting in errors in replication operations that identified the channel using that data. The issue has now been fixed in MySQL 8.0 by using VARCHAR for character columns, and in MySQL 5.7 by disabling the SQL mode when reading from those tables. Thanks to Brian Yue for the contribution.
- Replication: If a replica was disconnecting while a SHOW REPLICAS statement was being issued, the server was able to access deleted data.
- Replication: In Group Replication, if a SET gtid_next statement is used on a group member to set the GTID for the next transaction, it is possible for the same GTID to be used for a transaction that starts concurrently on another member. If both transactions reach the commit stage, the second one in the total order is rolled back, resolving the situation. However, when the transaction consistency level for Group Replication (the group_replication_consistency system variable) was set to BEFORE or BEFORE_AND_AFTER, members could reach a deadlock with one holding ownership of a GTID in the gtid_owned set, and another waiting for ownership to be released before committing the transaction. The wait function now only considers the GTIDs for committed transactions and not the GTIDs that are owned but not committed, except where a session owns a GTID that is concurrently committed, in which case the executing session errors out.
- Replication: If a replica server with the system variable replica_preserve_commit_order = 1 set was used under intensive load for a long period, the instance could run out of commit order sequence tickets. Incorrect behavior after the maximum value was exceeded caused the applier to hang and the applier worker threads to wait indefinitely on the commit order queue. The commit order sequence ticket generator now wraps around correctly. Thanks to Zhai Weixiang for the contribution.
- Replication: The group communication engine for Group Replication (XCom, a Paxos variant) now logs more information in the situation where the existing group members have difficulty in communicating with a joining member, for example due to network issues. This can result in the group remembering an old incarnation of the joining member and not allowing it to attempt to join again.
- Replication: Group Replication’s Group Communication System (GCS) now differentiates in its records of expelled members between those that had successfully joined the group, and those that never did manage to join the group.
- Replication: A race condition occurred if the Group Replication group member statistics in the Performance Schema were queried when Group Replication was being started or stopped.
- Replication: The replication receiver thread stopped with an error if the replication source server sent a heartbeat event containing a binary log file position that was above the 4GB offset, due to the large size of the binary log file. A new heartbeat event (Heartbeat_log_event_v2, log event type 41) that handles the larger value correctly has been added for use in this situation.
- Microsoft Windows: On Windows, added missing debug and test suite binaries for MySQL Server (commercial) and MySQL NDB Cluster (commercial and community).
- JSON: When the first argument passed to JSON_TABLE() was a row instead of a single JSON value, an assertion was raised while trying to evaluate the row expression. We fix this by raising an error during function resolution if the first argument is a row, so that the row expression itself is never evaluated.
- Using LPAD() or RPAD() in the expression for a generated column led to corrupted indexes on the parent table.
- References: See also: 30, 11.
- In some cases where warnings were issued, rows were missing from the results of aggregation using a temporary table.
- For openSUSE 15, added the libtirpc rpcgen build requirement in mysql.spec to now use TI-RPC.
- An UPDATE statement acting on multiple tables sometimes added elements to an internal list each time it was executed. Elements were never removed from this list, which increased the memory footprint with each execution of the statement. We fix this by clearing the element list following execution.
- The size of the HOST column of the Performance Schema processlist table is increased from VARCHAR(255) to VARCHAR(261).
- A keyring migration failure due to an OpenSSL error raised an assertion. The SSL error state was not flushed from the thread's OpenSSL error queue.
- A process listing function call caused a failure.
- The commercial Debian server packages contained two testing plugins (component_test_page_track_component.so and component_test_global_priv_registration.so); they were moved to the separate and optional testing package.
- For Fedora, increased the release package version number from 1 to 10; this helps eliminate potential installation-related problems with native Fedora dnf/yum packages of the same version.
- Increased compat level to 11 for Debian-based packages as the previous level of 9 is deprecated; and replaced calls to dh_systemd_enable + dh_systemd_start with dh_installsystemd to satisfy requirements for compatibility level 10+.
- A delete operation involving a full-text search query caused a failure.
- An improperly handled error caused a startup failure when using the keyring_okv plugin.
- For Debian, added a mysql-community-server dependency to the mysql-community-server-debug package so as to pull in all required packages needed by the debug build.
- For virtual generated columns of type DECIMAL, we now always store some data, so that we avoid undefined behavior when trying to convert the field buffer to a decimal value.
- MySQL now supports pushing a condition down to a derived table when an expression from the underlying derived table contains a variable set by a stored procedure.
- tls_version and admin_tls_version settings are now validated at server startup.
- The admin_tls_version variable accepted an invalid value.
- If two or more deprecated system variables were persisted using a SET PERSIST statement, when the server was restarted, a deprecation warning was only logged for the first of the deprecated system variables.
- For an index range scan whose key parts are equal, the range is now shown as an equality. For example, a = 3 is now displayed, instead of 3 <= a <= 3 as before this change.
- Replaced /var/run references with /run as /var/run usage is deprecated for tmpfiles.d configuration files. The symlink from /var/run to /run remains to keep current setups functional.
- Executing SHOW PROCESSLIST or accessing INFORMATION_SCHEMA.PROCESSLIST on a server with a specific configuration caused a failure.
- Added a mapping from ICU error code U_FILE_ACCESS_ERROR to the new MySQL error code ER_REGEXP_MISSING_FILE.
- A failed keyring function argument validation check caused a failure.
- Disabling the Group Replication plugin in a MySQL source distribution using the CMake option -DWITH_GROUP_REPLICATION=0 did not disable applications and tests related to Group Replication, which caused them to build incorrectly.
- The index range scan iterator did not always increment the number of rows examined as expected.
- Enabling the create_admin_listener_thread system variable on the command line could cause a server exit during startup under specific error conditions.
- The SUBSTR() function did not always correctly handle errors raised when trying to evaluate its arguments.
- International Components for Unicode version 67 introduced a new implementation for \X (match a grapheme cluster), which requires locale data not currently included with MySQL.
- This means that, when using the version of ICU bundled with MySQL, a query using \X raises the error ER_REGEXP_MISSING_RESOURCE; when using ICU supplied by the system, ER_WARN_REGEXP_USING_DEFAULT is reported as a Note.
- A full-text search query on a table stored in the BLACKHOLE storage engine where the chosen query plan used a full-text index scan caused an error instead of returning an empty result set.
- The LOCK_TIME value returned by the Performance Schema was underestimated, missing time spent waiting for row locks and time spent locking multiple tables. As of this release, LOCK_TIME accounts for:
- all the time waited on SQL TABLES
- all the time waited on DATA locks
- LOCK_TIME is now reported consistently in the slow log and the performance schema.
- A new option, \T, for the mysql client prompt prints an asterisk (*) if the current session is inside a transaction block. You can use the option with the --prompt command-line option, in a MySQL option file, or with the MYSQL_PS1 environment variable. Thanks to Murakami Kohei for the contribution.
- Constant subqueries in RANGE INTERVAL expressions were not always handled correctly.
- Decimal expressions which evaluated as NULL were not always handled correctly.
- A user account that was granted a MySQL role with a global SELECT privilege was denied access to the mysql database. The user account's global privileges were not checked when the role was granted.
- When setting up an Item_ref to a SELECT alias, its cached properties are copied (including whether or not it is part of a ROLLUP expression). However, these might not yet be correctly computed, so the computation should be done first, or the copied values could be wrong. Having the wrong value could cause certain expressions to be materialized in an intermediate step when they should not be (because they contain ROLLUP expressions that are not ready for computation, which is not known at this point). The issue is fixed by forcing cached values to be recomputed when an item is designated as a rollup item.
- An invalid comment string detected while upgrading a table from MySQL 5.7 to MySQL 8.0 caused the upgrade to fail with errors that did not provide sufficient contextual information.
- It was possible in some cases to create a generated column of type SERIAL, which is not allowed.
- See Numeric Data Type Syntax, and CREATE TABLE and Generated Columns, for more information.
- Statements which commit a transaction implicitly or explicitly are not allowed inside a trigger or a stored function. Both CREATE TRIGGER and CREATE FUNCTION should report an error (ER_COMMIT_NOT_ALLOWED_IN_SF_OR_TRG) in this case, but did not correctly handle DROP TABLESPACE.
- A SHOW TABLE STATUS operation raised an assertion failure when run on a table defined with a very large AVG_ROW_LENGTH value.
- When calculating the maximum number of rows likely to be read from a scan, the intermediate result was a double that could become greater than the maximum allowed value for a 64-bit unsigned integer. This triggered undefined behavior when converting the intermediate double value to an integer, which in some cases could lead to assert failures.
- We fix this by clamping the result in the range [1, UINT64_MAX].
- Queries using both UNION and LIMIT 0 triggered an assert failure in debug builds.
References: This issue is a regression of:
- Renaming an event using ALTER EVENT ... RENAME TO did not delete the Performance Schema instrumentation for the original event.
- An SSL handshake assertion was raised on debug builds when using the thread pool plugin.
- Some prepared statements using either GROUP BY WITH ROLLUP or one or more window functions could be executed successfully only once.
- An error occurred for statements of the form INSERT INTO view VALUE(tuple) AS row_alias(id_list). When executing such statements, the server calls the internal function Sql_cmd_insert_base::prepare_values_table() in order to prepare the derived table created as a VALUES alias; this function populates Sql_cmd_insert_base.values_field_list with Item_field objects pointing to the fields of the underlying table. When inserting into a view rather than a table, an expected real_item() transform, needed to map from an Item_view_ref referencing the view column to an Item_field representing the corresponding column in the underlying table, was not performed.
- Some multiply-nested subselects were not handled correctly, and could lead to an unplanned shutdown of the server.
- Inspection of a session thread object during a SHOW PROCESSLIST operation and a concurrent change to the thread's security context resulted in a race condition.
- In cases where there are no operations to be performed on the result of a UNION, the rows are streamed without storing them in a temporary table, although a placeholder for the temporary table still exists in the query block. Since this table is not instantiated, a check for estimates of the cost of reading the rows from the table while calculating the access cost and optimizing for range based access had unpredictable results.
- We fix this by skipping retrieval of such row estimates in the case of an uninstantiated temporary table.
- A multi-table DELETE statement using common table expressions was not always handled correctly.
- References: This issue is a regression of: 15
- If a CR_UNKNOWN_ERROR was to be sent to a client, an exception could occur.
- SSL-related code was revised to avoid a potential memory leak.
- In some cases, multiple-table UPDATE statements could block concurrent access.
- Keyring system variables that use an internal slot for complex settings no longer accept a setting of DEFAULT.
- The Timestamp column in the mysql.tables_priv and mysql.columns_priv grant tables was set to a zero timestamp value ("0000-00-00 00:00:00") for GRANT and REVOKE operations, preventing a logical restore of the grant tables. As of MySQL 8.0.28, a valid start time value is written to the Timestamp column.
- If you have existing grant table records with zero timestamp values that are preventing a logical restore of the grant tables, a workaround is to update records in the grant tables or in the dump files, replacing zero timestamp values with CURRENT_TIMESTAMP.
- Thanks to Venkatesh Prasad Venugopal for the contribution.
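A sketch of the workaround described above for existing grant table records or dump files; test it on a copy first, since editing grant tables directly requires care (the zero-date comparison assumes a permissive sql_mode):
    UPDATE mysql.tables_priv  SET Timestamp = CURRENT_TIMESTAMP WHERE Timestamp = '0000-00-00 00:00:00';
    UPDATE mysql.columns_priv SET Timestamp = CURRENT_TIMESTAMP WHERE Timestamp = '0000-00-00 00:00:00';
    FLUSH PRIVILEGES;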
- Producing a per-table dump using mysqldump in MySQL 5.7 and 8.0 requires a longer execution time compared to MySQL 5.6. This is because the information_schema.files table, which is queried for information on log file groups by mysqldump, contains information about InnoDB data files as well as NDB data files from MySQL 5.7. In MySQL 8.0, the issue has been fixed by rewriting the query to select only the appropriate data files. In MySQL 5.7, Information Schema tables do not have indexes, so a full table scan is still required.
- Keyring memory management was improved.
- Incorrect values for FORCE INDEX FOR GROUP BY could be set while saving and restoring the presence of FORCE INDEX hints within tables.
- If a query with the sql_buffer_result system variable enabled returned just one row, and an attempt was made to insert the result into a table, then an error in setting the output from the temporary table could produce a data exception.
- References: This issue is a regression of: 69.
- Resetting of the active slice was not performed in WindowIterator::Read() at the end of the input result set for windowing. This led to reading wrong values on reaching the ORDER BY sort, since the number of the active slice was still set to 1 (that is, to the items being read from the input table), while the ORDER BY sorting stage needs to read the values after computation of any window functions; for this, it needs the active slice to be that of the last window's output table.
- We fix this by moving the resetting of the slice to the output slice immediately following the read, so that it is already set correctly when returning at the end of the input set and moving on to the ordering.
- Our thanks to Casa Zhang and the Tencent team for the contribution.
- Code inspection revealed use of strncpy() in the internal function set_parse_error_message() without making sure that the last byte of the buffer being copied into was a null byte. We fix this by using snprintf() instead of strncpy(); this ensures that the result is valid even if it is truncated.
- When executing prepared statements that activated a trigger created with the DEFINER clause (or a stored function), invoker privileges were used for checking table access instead of definer privileges. This, in turn, could cause privilege checks on tables used by the trigger or stored function to fail.
- When a singleton histogram is constructed, its cumulative frequency is calculated by adding frequencies of previous buckets with the current bucket; because a floating-point value was used for the accumulator, this sometimes led to accumulated float errors, with the final cumulative frequency fractionally greater than 1.0.
- This fix accumulates the frequency with an integer type instead, to avoid intermediate floating-point errors.
- Our thanks to Casa Zhang and the Tencent team for the contribution.
- Multi-valued indexes were not used for queries executed from within stored functions.
- An error occurred for an SQL statement having the form shown here:
    INSERT INTO target_table
      SELECT aggregate_expression, non_aggregate_expression
      FROM empty_table;
- This happened when the query plan used aggregation from a temporary table, and when non_aggregate_expression was constant during one execution of the query, but could vary between executions. Such an expression might, for example, include a function such as NOW() or USER(). This resulted in the temporary table getting a column for non_aggregate_expression, which is unnecessary, since all rows have the same value. In addition, if there were no rows, there was no legal value to insert into target_table, which is what actually triggered the error.
- We fix this by not using a temporary table column when non_aggregate_expression is const for the current execution.
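A concrete instance of the statement form above, using hypothetical tables (whether the temporary-table aggregation plan is chosen depends on the optimizer):
    CREATE TABLE src (a INT);
    CREATE TABLE dst (cnt BIGINT, who VARCHAR(128));
    INSERT INTO dst SELECT COUNT(*), USER() FROM src;   -- src is empty; this previously could raise an error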
- When executing a prepared statement that included values passed in as strings, MySQL attempted to parse them as integers and could return an error unrelated to the input value.
- After a recent change, dynamic parameter handling was refactored so that the derived data type for parameters was determined based on context. For example, in a comparison such as int_col = ?, the parameter was given the same type as the (integer) column it was compared to. To preserve compatibility with existing MySQL applications, if a decimal or float value was supplied as a parameter, the statement was automatically reprepared with a new type assigned to the parameter based on the actual value. This handling preserved compatibility for numeric parameters.
- However, if a string parameter was supplied, it was still interpreted as an integer (the resolved data type), and this behavior was not compatible with older MySQL versions that detected the actual type of the value. The consequence was that if int_col = ? was executed with the parameter value '1.7', only the integer part of the string was used, making the effective comparison int_col = 1.
- To fix the issue, now when a string parameter is supplied, the parameter is analyzed to determine if it is an integer, a decimal, or a float value and the actual data type of the parameter is updated accordingly. Later, the actual type is compared to the resolved type and if it is incompatible, the statement is reprepared with the new actual type. So, the previous statement now evaluates as int_col = 1.7 and the comparison evaluates using decimal numbers.
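A short sketch of the corrected behavior with a hypothetical table:
    CREATE TABLE t (int_col INT);
    INSERT INTO t VALUES (1), (2);
    PREPARE ps FROM 'SELECT * FROM t WHERE int_col = ?';
    SET @p = '1.7';
    EXECUTE ps USING @p;   -- now compared as int_col = 1.7 (no rows); previously truncated to int_col = 1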