Category Archives: MySQL

[Warning] TIMESTAMP with implicit DEFAULT value is deprecated.


During the installation of the ZIP (free) version of MySQL, starting the server from the bin directory produces a warning:

\java_software\mysql-5.6.21-winx64\mysql-5.6.21-winx64\bin>mysqld
2017-04-01 10:44:04 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

Solution: add the following line to the [mysqld] section of my.ini:

explicit_defaults_for_timestamp=true

Run mysqld --install as an administrator; it should report that the service was installed successfully.
Start MySQL: net start mysql starts the MySQL service. For convenience during development you can set it to start automatically: Win+R, run services.msc, and change the startup type there.
Configure the environment variable: add MYSQL_HOME\bin to PATH.
Run MySQL: mysql -u root -p; the password is empty by default. (The commands are summarized in the sketch below.)
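
For reference, the commands above as a sketch (run the console as administrator; MYSQL_HOME stands for your own installation directory):

cd %MYSQL_HOME%\bin
mysqld --install
net start mysql
mysql -u root -p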

my.ini may not exist in the MySQL installation directory; in that case create it yourself by copying the bundled my-default.ini and renaming the copy to my.ini.
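
A minimal sketch of my.ini; only the explicit_defaults_for_timestamp entry is required for the warning above, everything else depends on your installation:

[mysqld]
explicit_defaults_for_timestamp=true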

When using spark.sql(), the Dataset can't use map, flatMap and other transformation operators

Abstract

We use Spark in a process like this: the spark.sql() function reads data into memory as a Dataset[Row] (a DataFrame). Because Row is generic, Spark cannot derive an encoder for it automatically, so we have to provide an encoding for this Dataset before these operators can be used on it. How to encode it is the question; the issues are summarized below. The error reads: Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.

This error is generally reported when the return type of the function we pass to an operator is not one that Spark can encode automatically through its own reflection. For example, if the function passed to the map operator returns a Map, the problem above appears, because Map is not within the scope Spark can encode by reflection: primitive types, String, case classes and tuples.
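
A minimal sketch that reproduces the error (spark is a SparkSession and default.api_order is the same illustrative table used in the methods below; in the Spark 2.0 era spark.implicits._ provided no encoder for Map):

import spark.implicits._

val orderDf = spark.sql("SELECT o.id, o.user_id FROM default.api_order o limit 100")
// Fails to compile: Unable to find encoder for type stored in a Dataset ...
val bad = orderDf.map(row => Map("id" -> row.getString(0), "user_id" -> row.getString(1)))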

The reason for this problem

Spark 2.0 and later use a new distributed dataset, in which DataFrame is an alias of Dataset[Row]. The new Dataset brings many optimizations; one of them is the Tungsten execution engine, which maintains its own memory manager, freeing computation from the constraints of the Java JVM and greatly improving memory usage. The new engine also stores data in memory in binary form, and most calculations are carried out directly on the binary stream: there is no need to deserialize the stream into Java objects and then serialize the results back. For this to work there must be a mapping between Java objects and the binary stream, otherwise Spark cannot know how many bytes of the stream a given object corresponds to. In Spark this mapping is provided by encoders. Spark can derive part of the encoding automatically by reflection: primitive types and String, case classes and tuples. For other collection types or our own custom classes it cannot, so we must supply the encoding ourselves, that is, give the data a schema.

How to solve this problem

Method 1

Convert the Dataset to an RDD and operate on the RDD. This is not recommended: compared with RDDs, Datasets carry a lot of low-level optimization and have much better performance.

val orderInfo1 = spark.sql(
  """
    |SELECT
    |o.id,
    |o.user_id
    |FROM default.api_order o
    |limit 100
  """.stripMargin).rdd.map(myfunction)

Method 2

Let Spark convert the Dataset[Row] into a Dataset[P] automatically with as[P], where P is a case class whose fields match the columns of the Row; this works as long as the Row does not contain types Spark cannot encode.

case class Orders(id: String, user_id: String)

object a {
  def main(args: Array[String]): Unit = {
    import spark.implicits._
    val orderInfo1 = spark.sql(
      """
        |SELECT
        |o.id,
        |o.user_id
        |FROM default.api_order o
        |limit 100
      """.stripMargin).as[Orders].map(myfunction)
  }
}

Method 3

Define a schema yourself, then encode with RowEncoder. This is only an example; in fact all the types used here could be encoded automatically through Spark's reflection.

import spark.implicits._
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.catalyst.encoders.RowEncoder

// the double StructType(StructType(...)) wrapper in the original was redundant
val schema = StructType(Seq(
  StructField("id", StringType, true),
  StructField("user_id", StringType, true)))
val encoders = RowEncoder(schema)
val orderInfo1 = spark.sql(
  """
    |SELECT
    |o.id,
    |o.user_id
    |FROM default.api_order o
    |limit 100
  """.stripMargin).map(row => row)(encoders)

Method 4

Use Scala pattern matching directly on the Row with case Row(...). The pattern tells the compiler which basic types the Row contains, so Scala/Spark can encode the resulting tuple automatically, and we can then do the corresponding processing.

import spark.implicits._
import org.apache.spark.sql.Row

val orderInfo1 = spark.sql(
  """
    |SELECT
    |o.id,
    |o.user_id
    |FROM default.api_order o
    |limit 100
  """.stripMargin).map { case Row(id: String, user_id: String) => (id, user_id) }

This gives the schema:

orderInfo1: org.apache.spark.sql.Dataset[(String, String)] = [_1: string, _2: string]

If the function instead returns a List:

val orderInfo1 = spark.sql(
  """
    |SELECT
    |o.id,
    |o.user_id
    |FROM default.api_order o
    |limit 100
  """.stripMargin).map { case Row(id: String, user_id: String) => List(id, user_id) }

The resulting schema is:

orderInfo1: org.apache.spark.sql.Dataset[List[String]] = [value: array<string>]

You can see that Spark treats a tuple as a special kind of case class, whose schema field names are _1, _2 and so on.
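
Incidentally, if you want meaningful column names instead of _1 and _2, a sketch using the standard rename helper toDF (the column names here are just the ones from the query):

val named = orderInfo1.toDF("id", "user_id")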

Oracle SQL Developer Error: Unable to find a Java Virtual Machine

A 64-bit Oracle database and a 32-bit Oracle client are installed. When the client's SQL Developer is opened for the first time from the Start menu, it prompts for the path of java.exe. If you choose the JDK bundled with the Oracle database, an error is reported after confirming.

Solution: the JDK bundled with the 64-bit Oracle database is 64-bit, but the installed Oracle client is 32-bit, so they are not compatible.

There are generally two ways to solve the problem

1) Download Oracle SQL Developer x64 from http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html and use it to replace the 32-bit SQL Developer under the original directory, D:\app\oracle\product\11.1.0\db_1\sqldeveloper. Then restart Oracle SQL Developer and set the path of java.exe.
2) Install a JDK 6 x86, i.e. a 32-bit JDK. Although the system is 64-bit, it is also compatible with a 32-bit JDK.

Because of other needs I have to use a 32-bit client, so I chose the second method, downloaded the 32-bit JDK from the Oracle official website and installed it. After that, the java.exe startup path of SQL Developer must be changed. At this point you cannot set the java.exe path by clicking SQL Developer in the Start menu, because the path has already been initialized; clicking SQL Developer just reports the error above. To change the java.exe path now, you have to edit the configuration file.

1. Find the sqldeveloper.conf file in the following path

D:\app\mattran\product\11.2.0\client_1\sqldeveloper\sqldeveloper\bin

2. Open the file

3. Change the JDK path in the file to the path of the 32-bit JDK installed just now (see the sketch after these steps)

4. Save and exit
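
For reference, the line to edit in sqldeveloper.conf is the SetJavaHome directive; the path below is only an example, substitute the location of your own 32-bit JDK:

SetJavaHome D:\Program Files (x86)\Java\jdk1.6.0_45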

Then click Oracle - OraClient11g_home1 → SQL Developer in the Start menu, and it starts successfully.

Using SQLite: undefined reference to `sqlite3_open'…

Assuming you have compiled and installed SQLite correctly, write a test program:

#include <stdlib.h>
#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db = NULL;
    char *zErrMsg = 0;
    int rc;

    rc = sqlite3_open("test1.db", &db);
    if (rc)
    {
        fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        exit(1);
    }
    else
        printf("open mydata successfully!\n");

    sqlite3_close(db);
    return 0;
}

Compiling with GCC always produces an error; the compile command was:

gcc -static -o hello -lsqlite3 -L /usr/local/lib -I/usr/local/include hello.c

The error message is as follows

/tmp/ccKeKpX9.o(.text+0x37): In function `main':
: undefined reference to `sqlite3_open'
/tmp/ccKeKpX9.o(.text+0x51): In function `main':
: undefined reference to `sqlite3_errmsg'
/tmp/ccKeKpX9.o(.text+0x73): In function `main':
: undefined reference to `sqlite3_close'
/tmp/ccKeKpX9.o(.text+0x9b): In function `main':
: undefined reference to `sqlite3_close'
collect2: ld returned 1 exit status

The error is neither in SQLite nor in your program, but in the GCC command: the order of GCC's arguments matters. The linker resolves symbols left to right, so -lsqlite3 must come after the source or object files that reference it. The correct compile command is:

gcc -o hello -L /usr/local/lib -I/usr/local/include -static hello.c -lsqlite3

Solve the problem of unknown column ‘password’ in ‘field list’ in MySQL

In MySQL 5.7 the mysql.user table no longer has a password column; it has been renamed to authentication_string.

update mysql.user set authentication_string=password('root') where user='root' ;

The complete way to change MySQL password is as follows:

1. vim /etc/my.cnf and add skip-grant-tables under the [mysqld] section

2./etc/init.d/mysqld restart

3. Type mysql in the terminal to log in directly to the MySQL database, then switch to it with use mysql;

4.update mysql.user set authentication_string=password('root') where user='root' ;

5.flush privileges;  

6.quit;

7. Edit /etc/my.cnf to remove skip-grant-tables, then restart MySQL: /etc/init.d/mysqld restart

8. mysql -u root -p and enter the password to log in to the MySQL database

In older versions the command to change the password was as follows (as noted above, in MySQL 5.7 the password column was renamed to authentication_string):

mysql> update user set password=password('new password') where user='username';
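
On MySQL 5.7.6 and later, a sketch of a cleaner alternative that avoids updating mysql.user directly (assuming the root account connects from localhost):

mysql> alter user 'root'@'localhost' identified by 'new password';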

MySQL Error 1170: BLOB/TEXT column used in key specification without a key length

The official explanation:

The error happens because MySQL can index only the first N chars of a BLOB or TEXT column. It mainly happens when a field/column of type TEXT or BLOB, or of one of the types belonging to the TEXT or BLOB families such as TINYBLOB, MEDIUMBLOB, LONGBLOB, TINYTEXT, MEDIUMTEXT, and LONGTEXT, is made a primary key or index. With a full BLOB or TEXT and no length value, MySQL is unable to guarantee the uniqueness of the column, as it is of variable and dynamic size. So, when using BLOB or TEXT types in an index, the value of N must be supplied so that MySQL can determine the key length (a sketch follows after this explanation). However, MySQL doesn't support a key length limit in the column type itself: TEXT(88) simply won't work.
The error will also pop up when you try to convert a table column from a non-TEXT and non-BLOB type such as VARCHAR or ENUM into a TEXT or BLOB type when the column has already been defined with a unique constraint or index. The ALTER TABLE SQL command will fail.
The solution is to remove the TEXT or BLOB column from the index or unique constraint, or to set another field as the primary key. If you can't do that, and you want to place a limit on the TEXT or BLOB column, try using a VARCHAR type and placing a limit on its length. By default, VARCHAR is limited to a maximum of 255 characters, and its limit must be specified explicitly within brackets right after its declaration, i.e. VARCHAR(200) will limit it to 200 characters.
Sometimes, even though you don't use a TEXT or BLOB related type in your table, Error 1170 may still appear. It happens, for example, when you specify a VARCHAR column as primary key but set its length or character size wrongly. VARCHAR in these versions only accepts up to 255 characters in a key, so anything like VARCHAR(512) forces MySQL to auto-convert it to a TEXT datatype, which subsequently fails with error 1170 on key length if the column is used as a primary key or a unique or non-unique index. To solve this problem, specify a figure below 256 as the size of the VARCHAR field.
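
A sketch of the prefix-index workaround mentioned above (table and column names are made up for illustration); the key length goes in the index definition, not in the column type:

-- fails with Error 1170: used in key specification without a key length
create table articles (body text, index (body));

-- works: index only the first 100 characters of the column
create table articles (body text, index (body(100)));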

What is the difference between utf8mb4_unicode_ci and utf8mb4_general_ci in a MySQL database?

utf8mb4 uses up to four bytes per character, utf8 up to three. utf8mb4 has better compatibility (it can store all of Unicode, including emoji) but can take up more space.

The differences are mainly in sorting accuracy and in performance.

Accuracy
utf8mb4_unicode_ci sorts and compares based on the standard Unicode collation algorithm, and sorts accurately across languages.
utf8mb4_general_ci does not implement full Unicode collation; with some special languages or characters the sort order may not be what is expected.

Performance
utf8mb4_general_ci is faster at comparison and sorting.
utf8mb4_unicode_ci implements a slightly more complex sorting algorithm in order to handle special characters correctly.
However, in most cases such complex comparisons do not occur. In theory general may be faster than unicode, but on current CPUs the difference is nowhere near large enough to be a deciding performance factor; index and SQL design matter far more. My personal recommendation is utf8mb4_unicode_ci, which is also in line with the Unicode-based default collation in MySQL 8.0. Users should pay more attention to keeping the character set and collation rules unified across the DB than to which collation to choose.
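
A sketch of setting the character set and collation explicitly (the database, table and column names are illustrative):

create database demo character set utf8mb4 collate utf8mb4_unicode_ci;

create table demo.users (
  name varchar(100)
) character set utf8mb4 collate utf8mb4_unicode_ci;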

Failed to obtain JDBC Connection

Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: The server time zone value 'Öйú±ê׼ʱ¼ä' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the serverTimezone configuration property) to use a more specific time zone value if you want to utilize time zone support.

My MySQL is 5.7.25. Perhaps the installation did not set the MySQL time zone (the garbled value in the message is mojibake of 中国标准时间, "China Standard Time"); the fix is simply to set the time zone in the JDBC URL:

spring.datasource.url=jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=UTF-8&serverTimezone=CST
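
Note that CST is ambiguous as a time zone ID (it can stand for China Standard Time or US Central Standard Time, among others). A sketch using an unambiguous zone instead, assuming the server runs on China time:

spring.datasource.url=jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai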