Category Archives: Error

[Solved] Uniapp H5 Error: Access to XMLHttpRequest at 'http://www.localtest.com/api/api/v1/job/getPositionList'…

Error Message:
Access to XMLHttpRequest at 'http://www.localtest.com/api/api/v1/job/getPositionList' from origin 'http://localhost:8080' has been blocked by CORS policy: Request header field os-version is not allowed by Access-Control-Allow-Headers in preflight response.

Original request URL:

http://www.localtest.com/api/api/v1/job/getPositionList

Solution:

Configure a cross-domain proxy in the manifest.json file; the method is similar to devServer in vue.config.js:

"h5" : {
        "devServer" : {
            "disableHostCheck" : true, // Enable you to use your own domain name 
            "proxy": {
              "/api": {
                "target": "http://www.localtest.com",
                "changeOrigin" : true,
                "secure" : false,
                "pathRewrite": { //api in the matching request path will be replaced with https: // www.test.com
                // Example: /api/api/user => https://www.localtest.com/api/user
                  "^/api": ""
                }
              }
            }
        }
    }

In addition, choose the base URL by environment:

baseUrl = process.env.NODE_ENV === 'development' ? '/api' : 'https://www.localtest.com'

The url passed to uni.request should then be built as baseUrl + '/user/info'. During development, the request address seen in the browser is http://localhost:8080/api/user/info, and the devServer proxy rewrites and forwards it to the target.
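
A minimal sketch of the request call under this setup (the endpoint path and handlers are illustrative):

const baseUrl = process.env.NODE_ENV === 'development'
  ? '/api'                        // goes through the devServer proxy in development
  : 'https://www.localtest.com';  // goes straight to the host in production

uni.request({
  url: baseUrl + '/user/info',    // dev: proxied to http://www.localtest.com/user/info
  method: 'GET',
  success: (res) => {
    console.log(res.data);
  },
  fail: (err) => {
    console.error(err);
  }
});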


[Solved] Could not initialize class org.jetbrains.jps.builders.JpsBuildBundle

This happened while writing the simplest Hello World demo in IDEA. The environment is Azul Zulu JDK 16 (aarch64) on a MacBook Air with the M1 chip.

The IDEA version:

IntelliJ IDEA 2020.3.1 (Ultimate Edition)
Build #IU-203.6682.168

This version is incompatible with Java 16: Java 16 is not supported until IntelliJ IDEA 2020.3.2 (https://youtrack.jetbrains.com/issue/IDEA-257630).

Solution:

If you are lazy, you can temporarily fall back to Java 15;

however, I still suggest the latest version of IDEA, because there is no telling where else bugs will crop up.

Reference link: https://stackoverflow.com/questions/66770810/intellij-could-not-initialize-class-org-jetbrains-jps-builders-jpsbuildbundle

[Solved] error: aggregate value used where an integer was expected

If you try to convert a variable of a custom struct type to unsigned with a direct cast, GCC (version 4.4.3 here) reports an error at compile time.

For example:

struct _test {
    unsigned hour : 5;
    unsigned minute : 6;
};

struct _test var = {5, 6};
printf("var = %x\n", (unsigned)var);  /* error: a struct is an aggregate, not an integer */

error: aggregate value used where an integer was expected

Solution: take the variable's address and reinterpret its storage through a pointer cast instead:

printf("var = %x\n", *(unsigned *)&var);

[Solved] You cannot use the new command inside an Angular CLI project.

You cannot use the new command inside an Angular CLI project.
If you see these words in your terminal, the simplest fix is to remove the package.json that makes the CLI think you are inside an existing project:

ls -a 
rm -rf package.json

Then try again, and everything should go smoothly.

[Solved] SparkException: Could not find CoarseGrainedScheduler or it has been stopped.

org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
        at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:163)
        at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:133)
        at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192)
        at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:356)
        at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:494)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:301)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120)
        at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142)
        at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
        at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
        at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:217)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
18/10/14 22:23:26 ERROR netty.Inbox: Ignoring error
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
        at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:163)
        at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:133)
        at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192)
        at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:356)
        at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:494)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:301)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120)
        at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142)
        at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
        at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
        at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:217)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

......

java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@708dfce7 rejected from java.util.concurrent.ThreadPoolExecutor@346be0ef[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 224]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
        at scala.concurrent.impl.ExecutionContextImpl.execute(ExecutionContextImpl.scala:122)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
        at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
        at scala.concurrent.Promise$class.complete(Promise.scala:55)
        at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
        at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
        at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
        at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
        at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
        at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
        at scala.concurrent.Promise$class.complete(Promise.scala:55)
        at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
        at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
        at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
        at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
        at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
        at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
        at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
        at scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
        at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
        at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
        at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
        at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
        at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:208)
        at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$2.apply(NettyRpcEnv.scala:230)
        at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$2.apply(NettyRpcEnv.scala:230)
        at org.apache.spark.rpc.netty.RpcOutboxMessage.onFailure(Outbox.scala:71)
        at org.apache.spark.network.client.TransportResponseHandler.failOutstandingRequests(TransportResponseHandler.java:110)
        at org.apache.spark.network.client.TransportResponseHandler.channelUnregistered(TransportResponseHandler.java:124)
        at org.apache.spark.network.server.TransportChannelHandler.channelUnregistered(TransportChannelHandler.java:94)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelUnregistered(ChannelInboundHandlerAdapter.java:53)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:158)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:144)
        at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:739)
        at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:659)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:328)
        at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:627)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:362)

Solution:

  1. Add --conf spark.dynamicAllocation.enabled=false to the spark-submit ... command when submitting the job
  2. Or set it on the SparkConf in code: conf.set("spark.dynamicAllocation.enabled", "false") (see the sketch below)
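
A minimal sketch of option 2 (the app name is illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("my-job")  // illustrative
  .set("spark.dynamicAllocation.enabled", "false")

val sc = new SparkContext(conf)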

The following sections have been defined but have not been rendered for the layout page "~/views/shared/_layout.cshtml": "footscript"

This error means a content view defined a section (here "footscript") that the layout page never renders.

Method 1:

1. In the _Layout.cshtml layout body, add the section, Scripts.Render, and RenderSection tags; the name passed to RenderSection must match the section name from the error (here "footscript"). Sample code:

<body class="bodyBg font_fm">
    <section>
        @RenderBody()
    </section>
    @Scripts.Render("~/bundles/jquery")
    @RenderSection("footscript", required: false)
</body>

2. In the content view, wrap the relevant markup in a matching section tag, as shown in the sketch below.
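
A sketch of the content-view side (the script path is illustrative):

@section footscript {
    <script src="~/Scripts/page-specific.js"></script>
}
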
Method 2:

Searching online mostly turns up the first method, but in my code it had no effect.

After searching for a long time, I found the second one. Before using the second one, undo the first one, otherwise an error about reusing the RenderBody method is reported.

The first method may still work for some people; it depends on the code.

ERROR in ch.qos.logback.core.joran.spi.Interpreter@73:41 – no applicable action for [AppenderRef], current ElementPath is [[Configuration][Loggers][Root][AppenderRef]]

1. Error Cause

A Spring Boot project that adds rocketmq-spring-boot-starter reports this error on startup. The element path [Configuration][Loggers][Root][AppenderRef] is Log4j2 configuration syntax, so the error indicates that Logback (Spring Boot's default logging backend, pulled in through spring-boot-starter-logging) is trying to parse a Log4j2-style configuration it cannot understand.

2. Solution: exclude the conflicting logging dependency

        <dependency>
            <groupId>org.apache.rocketmq</groupId>
            <artifactId>rocketmq-spring-boot-starter</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

The project then starts successfully.

[Solved] Error running 'xyp': Unable to open debugger port (127.0.0.1:56767): java.net.BindException "Address already in use: NET_Bind"

When running a web project, IDEA may report Error running 'Tomcat': Unable to open debugger port (127.0.0.1:56767): java.net.SocketException "Socket closed", and Tomcat fails to start.
In that case, you need to find the process occupying the port and end it!

Step 1: Open a DOS command window and enter netstat -ano, or netstat -ano | find "56767"

Find the PID corresponding to the port.

Step 2: Open Task Manager (Ctrl + Shift + Esc)

In "Details", find the task with the matching PID and click "End task", or kill it from the command line as shown below.
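
A command-line sketch of both steps (replace 12345 with the PID that netstat printed):

netstat -ano | find "56767"
taskkill /PID 12345 /F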

Perfect solution!

[Solved] SQL Server Error: Cannot drop database XXX because it is currently in use

I encountered this problem when using pymssql to connect to SQL Server.

pymssql.OperationalError: (3702, b'Cannot drop database "XXX" because it is currently in use.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
The code:

cursor = conn.cursor()
conn.autocommit(True)
cursor.execute('CREATE DATABASE XXX ON (NAME=\'XXX_Data\', FILENAME=\'{}\\XXX.mdf\', SIZE=5 MB, MAXSIZE=50 MB, FILEGROWTH=5%) LOG ON (NAME=\'XXX_Log\',FILENAME=\'{}\\XXX_Log.ldf\',SIZE=1 MB,MAXSIZE=5 MB,FILEGROWTH=1 MB)'.format(dir, dir))
cursor.execute('USE XXX CREATE TABLE xxx ( xxxxxxx )')
cursor.execute('DROP TABLE xxx')
cursor.execute('USE XXX DROP DATABASE XXX')

I turned on autocommit before operating on the database, and found that if I performed any other operation on the database between the CREATE DATABASE and DROP DATABASE statements, the error above occurred at DROP DATABASE.
After checking the relevant information, I found the cause: the DROP is issued while the session is still using the database itself ("USE XXX"). To delete successfully, switch to master first:

cursor.execute('USE MASTER DROP DATABASE XXX')
Then the deletion is successful.
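
Put together, a sketch of the corrected sequence (connection parameters are illustrative):

import pymssql

conn = pymssql.connect(server='localhost', user='sa', password='***')  # illustrative credentials
cursor = conn.cursor()
conn.autocommit(True)  # DDL such as DROP DATABASE should run outside an open transaction

cursor.execute('USE master')  # leave XXX so this session no longer holds it "in use"
cursor.execute('DROP DATABASE XXX')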

Another method found online is to run the following statement first, which forces the database into single-user mode and rolls back any open transactions. In my actual test it made no difference either way; the reason is unknown.

use master
go
alter database database_name set single_user with rollback immediate 

Reposted from the web; reported to be effective.