[Solved] pydotplus generate iris.pdf error: InvocationException: GraphViz’s executables not found

Error: InvocationException: GraphViz's executables not found. The source code is as follows:

from itertools import product

import numpy as np
import matplotlib.pyplot as plt

from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier


# Still using the iris data that comes with it
iris = datasets.load_iris()
X = iris.data[:, [0, 2]]
y = iris.target

# Training the model, limiting the maximum depth of the tree to 4
clf = DecisionTreeClassifier(max_depth=4)
#Fitting the model
clf.fit(X, y)


# draw
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
                     np.arange(y_min, y_max, 0.1))

Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.contourf(xx, yy, Z, alpha=0.4)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.8)
plt.show()

There is no problem up to here. Next we generate an image of the decision tree; here is the code:

from IPython.display import Image  
from sklearn import tree
import pydotplus 
dot_data = tree.export_graphviz(clf, out_file=None, 
                         feature_names=iris.feature_names,  
                         class_names=iris.target_names,  
                         filled=True, rounded=True,  
                         special_characters=True)  
graph = pydotplus.graph_from_dot_data(dot_data)  
Image(graph.create_png())

At this point the error appeared:

InvocationException: GraphViz's executables not found

Searching online, I learned that the GraphViz environment variables were not configured properly, but I did not know where GraphViz was installed, so I used Everything (a very handy file-search tool) to locate the GraphViz bin directory. I then added that directory to the PATH environment variable, and the code finally ran successfully.
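
If you prefer not to edit the system environment variables, you can also set the path from inside the script. A minimal sketch, continuing from the session above (the Graphviz directory below is an assumption; substitute the bin directory you found with Everything):

import os

# Hypothetical Graphviz location -- replace with the bin directory on your machine
GRAPHVIZ_BIN = r"C:\Program Files (x86)\Graphviz2.38\bin"
os.environ["PATH"] += os.pathsep + GRAPHVIZ_BIN

import pydotplus
from sklearn import tree

dot_data = tree.export_graphviz(clf, out_file=None,
                                feature_names=iris.feature_names,
                                class_names=iris.target_names,
                                filled=True, rounded=True,
                                special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
graph.write_pdf("iris.pdf")  # or graph.write_png("iris.png")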

[Solved] brew update Error: “fatal: Could not resolve HEAD to a revision”

brew update reports “fatal: Could not resolve HEAD to a revision”

When executing the brew update command:

% brew update
error: Not a valid ref: refs/remotes/origin/master
fatal: Could not resolve HEAD to a revision
Already up-to-date.

Solution

Run the update with verbose output to see where it fails:

% brew update --verbose
Checking if we need to fetch /opt/homebrew...
Checking if we need to fetch /opt/homebrew/Library/Taps/homebrew/homebrew-cask...
Fetching /opt/homebrew...
Checking if we need to fetch /opt/homebrew/Library/Taps/homebrew/homebrew-core...
Fetching /opt/homebrew/Library/Taps/homebrew/homebrew-core...
Fetching /opt/homebrew/Library/Taps/homebrew/homebrew-cask...
fatal: unable to access 'https://github.com/Homebrew/homebrew-cask/': Failed to connect to github.com port 443: Operation timed out
Error: Fetching /opt/homebrew/Library/Taps/homebrew/homebrew-cask failed!
Updating /opt/homebrew...
Branch 'master' set up to track remote branch 'master' from 'origin'.
Switched to and reset branch 'master'
Your branch is up to date with 'origin/master'.
Switched to and reset branch 'stable'
Current branch stable is up to date.

Updating /opt/homebrew/Library/Taps/homebrew/homebrew-core...
fatal: Could not resolve HEAD to a revision

Go into the path from the error message:

% cd /opt/homebrew/Library/Taps/homebrew/homebrew-core
% ls -al

total 0
drwxr-xr-x   3 user  admin   96  4 13 16:34 .
drwxr-xr-x   4 user  admin  128  4 14 11:31 ..
drwxr-xr-x  12 user  admin  384  4 14 11:44 .git

Execute:

% git fetch --prune origin
% git pull --rebase origin master

From https://mirrors.ustc.edu.cn/homebrew-core
 * branch                  master     -> FETCH_HEAD

After that succeeds, run brew update again:

% brew update

Already up-to-date.

Then other commands can be executed normally, e.g.:

% brew install rbenv ruby-build

[Solved] mujoco_py Run example Error: ERROR: GLEW initalization error: Missing GL version

After successfully installing mujoco_py, running one of its built-in examples produced the error: ERROR: GLEW initalization error: Missing GL version

Add the following to the shell configuration (e.g. ~/.bashrc):

export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so

=================================================

All of the mujoco_py examples are covered below; the following points require attention.

Run the examples that come with mujoco-py:

body_interaction.py, disco_fetch.py, markers_demo.py, render_callback.py, setting_state.py, tosser.py

Environment variables need to be set:

export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so

Otherwise, an error will be reported:

ERROR: GLEW initalization error: Missing GL version

Run the built-in examples:

internal_functions.py multigpu_rendering.py

The environment variable needs to be set to empty:

export LD_PRELOAD=""

Otherwise, an error will be reported.

My analysis of the export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so setting:

The MuJoCo 2.1.1 release ships with its own GLEW library, so when running a mujoco-py simulation, set export LD_PRELOAD=""

But when rendering visualizations, the system's GLEW library must be used; in that case set export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so

If the system's GLEW library is not used when rendering, a version error is reported; likewise, if MuJoCo's own GLEW library is not used when running the simulation, an error is also reported.
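
A minimal way to script this switching (a sketch; the example filenames are the ones listed above, and LD_PRELOAD must be set in the child process's environment before it starts, since the dynamic linker only reads it at process startup):

import os
import subprocess

# Rendering example: preload the system GLEW library.
env = dict(os.environ, LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libGLEW.so")
subprocess.run(["python3", "body_interaction.py"], env=env, check=True)

# Pure-simulation example: clear the variable so MuJoCo's own GLEW is used.
env = dict(os.environ, LD_PRELOAD="")
subprocess.run(["python3", "internal_functions.py"], env=env, check=True)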

The serialize_model.py and substep_callback.py examples do not require setting any environment variables.

mjvive.py requires the VR SDK and related support, which is not covered here. (It should be run after installing the HTC VR client on your Linux machine.)

How to Solve setSupportActionBar() Method Error

In Android development, to replace the ActionBar with the Toolbar control, you need to call the setSupportActionBar() method in your Java code, as follows:

Toolbar toolbar = (Toolbar) this.findViewById(R.id.toolBar);
this.setSupportActionBar(toolbar);

There are two common types of errors:

1. Method parameter error

This error occurs because the wrong Toolbar class was imported. Replace the following import:

import android.widget.Toolbar;

with:

import android.support.v7.widget.Toolbar;

 

2. Method name error

Your activity needs to extend the ActionBarActivity class or the AppCompatActivity class.

Because ActionBarActivity is deprecated, extending AppCompatActivity is recommended.

Note: if you extend AppCompatActivity, you need to use a Theme.AppCompat.Light.NoActionBar theme, for example:

<style name="AppTheme.Base" parent="Theme.AppCompat.Light.NoActionBar">
        <item name="android:windowNoTitle">true</item>
        <item name="android:windowActionBar">false</item>
</style>

[Solved] Windows 10 Remote Desktop Error: CredSSP Encryption Oracle Remediation

Windows 10 Remote Desktop connection fails with an error caused by CredSSP encryption oracle remediation.

The fix commonly found online is a Group Policy setting, but Windows 10 Home edition has no Group Policy editor and cannot use it (see the reference link at the end for that procedure):

Policy path: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation

Setting name: Encryption Oracle Remediation

Since that was not available to me, I had to edit the registry instead; after a long struggle it finally worked, so here are the detailed steps.

1. Open the Registry Editor: press Win+R and enter "regedit" (similar to entering cmd at the command prompt).

2. Navigate to the path: HKLM (short for HKEY_LOCAL_MACHINE)\Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters

The keys after System may not exist yet; if so, create them yourself.

3. In the final key, create a new DWORD (32-bit) value named "AllowEncryptionOracle" with the value 2, and save it.

4. If it doesn't take effect, try restarting; for me it worked without a reboot.
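
The same registry change can be scripted with Python's standard winreg module; a minimal sketch of steps 2-3 above (run from an elevated, administrator Python on Windows):

import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters"

# CreateKeyEx creates any missing intermediate keys (step 2).
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # Step 3: a DWORD named AllowEncryptionOracle with value 2.
    winreg.SetValueEx(key, "AllowEncryptionOracle", 0, winreg.REG_DWORD, 2)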

[Solved] jQuery Error: Uncaught ReferenceError: $ is not defined

When using jQuery, I found the following error:

Uncaught ReferenceError: $ is not defined (anonymous function)

The reasons for this error:

1. The path to the jQuery library file is incorrect. Checking that the file path is correct usually resolves the error.

2. If the path is correct, the scripts may be loaded in the wrong order in the HTML. Loading the jQuery library before any script that uses it resolves the error.

[Solved] Springboot Project mybatis Error: Invalid bound statement (not found)

There are many reasons for the mybatis error Invalid bound statement (not found), but, as the message says, they all come down to the SQL statement in the xml not being found. There are four common causes:

Type 1: Syntax error

Java DAO layer interface

public void delete(@Param("id") String id);

The corresponding mapper.xml file

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="xxx.xxx.xxx.Mapper">
    <!-- delete data -->
    <delete id="delete" parameterType="java.lang.String">
        DELETE FROM xxx WHERE id=#{id}
    </delete>
</mapper>

Check:

1. Whether the method name in the interface (delete) matches the id="delete" in the xml file.

2. Whether the namespace="xxx.xxx.xxx.Mapper" in the xml file matches the fully qualified name of the interface.

3. Whether parameterType and resultType are correct; note that resultMap and resultType are not the same thing.

Type 2: Compilation error

In the project's target\classes\ directory, look under the path from the error message and check whether the corresponding xml file exists.

(1) If the xml file is missing, add the following to pom.xml:

<build>
    <resources>
        <resource>
            <directory>src/main/java</directory>
            <excludes>
                <exclude>**/*.java</exclude>
            </excludes>
        </resource>
        <resource>
            <directory>src/main/resources</directory>
            <includes>
                <include>**/*.*</include>
            </includes>
        </resource>
    </resources>
</build>

Then delete the files in the classes folder and recompile; the corresponding xml file should appear.

(2) If the xml file exists, open it and check whether the failing statement matches the source file. If it does not, first clear the files in the classes folder, run mvn clean, and then recompile.

Type 3: Configuration error

The scan-package path in the project configuration file was wrong. For example, the basePackage property in the Spring configuration must point to the exact package containing the mapper interfaces; do not point it at a parent or higher-level package. Patterns such as cn.dao or cn.* can also cause errors, because the intended package may never actually be scanned.

Type 4: Wrong dependency version

The project had always run normally, but one day after recompiling it kept reporting this error. I rechecked everything according to the first three causes above and the error persisted. Then I suddenly remembered that the company had announced a few days earlier that several internal dependency packages had been updated. Because I had written [xxx,) in pom.xml, the latest versions were pulled in automatically. I changed pom.xml to pin the old versions, recompiled, and the problem was solved.

[Solved] pip Install Error: is not a supported wheel on this platform

Possible reason 1: The wheel does not match your Python version. In the downloaded filename, cp27 means CPython 2.7, and likewise for the others.

Possible reason 2: This is what I ran into: I had downloaded the wheel for the correct Python version, yet pip still reported that the platform was not supported.

Installing the downloaded numpy wheel with pip on the command line failed with: *** is not a supported wheel on this platform. A post on Stack Overflow solved the problem.

Method: in the Python shell, run import pip; print(pip.pep425tags.get_supported()) to get the tags pip supports. Mine were as follows:

>>> import pip; print(pip.pep425tags.get_supported())
[('cp27', 'none', 'win32'), ('py2', 'none', 'win32'), ('cp27', 'none', 'any'), ('cp2', 'none', 'any'), ('cp26', 'none', 'any'), ('cp25', 'none', 'any'), ('cp24', 'none', 'any'), ('cp23', 'none', 'any'), ('cp22', 'none', 'any'), ('cp21', 'none', 'any'), ('cp20', 'none', 'any'), ('py27', 'none', 'any'), ('py2', 'none', 'any'), ('py26', 'none', 'any'), ('py25', 'none', 'any'), ('py24', 'none', 'any'), ('py23', 'none', 'any'), ('py22', 'none', 'any'), ('py21', 'none', 'any'), ('py20', 'none', 'any')]

Comparing with this list, the filename of the wheel I downloaded was not supported; after renaming it to numpy-1.10.4+mkl-cp27-none-win32.whl, it installed successfully.

Other libraries can be installed the same way, but also pay attention to each library's dependencies.
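
Note that pip.pep425tags was removed in newer pip versions; a sketch of the equivalent check using the packaging library (assuming it is installed, e.g. via pip install packaging):

from packaging.tags import sys_tags

# List the (interpreter, abi, platform) tags this interpreter accepts;
# a wheel installs only if its filename tags match one of them.
for tag in sys_tags():
    print(tag.interpreter, tag.abi, tag.platform)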

Reference: http://stackoverflow.com/questions/28107123/cannot-install-numpy-from-wheel-format?rq=1

[Solved] CDH6.3.2 Hive on spark Error: is running beyond physical memory limits

Hue reports the following error when running Hive SQL:

java.lang.IllegalStateException: Connection to remote Spark driver was lost

The yarn error log shows:

Container [pid=41355,containerID=container_1451456053773_0001_01_000002] is running beyond physical memory limits.
Current usage: 2.0 GB of 2 GB physical memory used; 5.2 GB of 4.2 GB virtual memory used. Killing container.

The job most likely exceeded the memory configured for map and reduce, causing the task to fail. Increasing the map and reduce memory eliminated the problem. The relevant parameters are described below.

The RM (ResourceManager) memory configuration is mainly controlled by the following two parameters (these are Yarn platform settings and belong in yarn-site.xml):
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
Description: the minimum and maximum memory a single container can request. An application cannot exceed the maximum when requesting memory, and a request below the minimum is rounded up to the minimum; in that sense the minimum is somewhat like the page size in an operating system. The minimum value has another purpose: computing the maximum number of containers on a node. Note: once set, these two values cannot be changed dynamically (that is, while applications are running).

The NM (NodeManager) memory configuration is mainly controlled by the following two parameters (also Yarn platform settings, configured in yarn-site.xml):
yarn.nodemanager.resource.memory-mb
yarn.nodemanager.vmem-pmem-ratio
Description: the first is the maximum memory available per node; the two RM values above should not exceed it. It also determines the maximum number of containers: divide it by the minimum container memory configured in the RM. The second is the virtual memory ratio, i.e. the multiple of physical memory a task may use as virtual memory; the default is 2.1. Note: the first parameter cannot be modified once set for the entire run, and its default is 8 GB, which Yarn assumes even if the machine has less than 8 GB of memory.

The AM (ApplicationMaster) memory parameters are described here using MapReduce as the example (these are AM settings and belong in mapred-site.xml):
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
Description: these two parameters specify the memory for the two MapReduce task types (Map and Reduce), and their values should lie between the RM's minimum and maximum container sizes. If not configured, they can be derived from the simple formula:
max(MIN_CONTAINER_SIZE, (Total Available RAM) / containers)
In general, reduce memory should be twice the map memory. Note: these two values can be overridden with parameters when the application starts.

Other memory-related AM parameters, including JVM options, can be configured through:
mapreduce.map.java.opts
mapreduce.reduce.java.opts
Description: these two parameters pass options to the JVMs that run the tasks (Java, Scala, etc.), including memory options such as -Xmx and -Xms. The heap sizes set here should be smaller than the corresponding map.mb and reduce.mb values above.

To summarize, configuring Yarn memory mainly means configuring three things: the physical memory limit for each Map and Reduce task; the JVM heap size limit for each task; and the virtual memory limit.

The following concrete error illustrates these memory settings:

Container [pid=41884,containerID=container_1405950053048_0016_01_000284] is running beyond virtual memory limits. Current usage: 314.6 MB of 2.9 GB physical memory used; 8.7 GB of 6.2 GB virtual memory used. Killing container.

The configuration was as follows:

        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>100000</value>
        </property>
        <property>
            <name>yarn.scheduler.maximum-allocation-mb</name>
            <value>10000</value>
        </property>
        <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>3000</value>
        </property>
        <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>2000</value>
        </property>

From this configuration, the container minimum and maximum memory are 3000 MB and 10000 MB respectively. reduce is set to 2000 MB, which is below the minimum, and map is not set at all, so both tasks are allocated 3000 MB; that is the "2.9 GB physical memory used" in the log. Since the default virtual memory ratio (2.1) applies, the virtual memory limit for both the Map and Reduce tasks is 3000 MB * 2.1 ≈ 6.2 GB. The application's virtual memory exceeded this limit, hence the error.
Solution: increase the virtual memory ratio when starting Yarn, or reduce the memory the application uses at runtime.
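
A minimal sketch of this arithmetic in Python (simplified: real Yarn also rounds requests up to multiples of the minimum allocation; the values are the ones from the configuration above):

def effective_container_mb(requested_mb, min_alloc_mb=3000, max_alloc_mb=10000):
    """A request below the minimum is rounded up to the minimum."""
    if requested_mb is None:  # parameter not set at all
        requested_mb = min_alloc_mb
    return max(min_alloc_mb, min(requested_mb, max_alloc_mb))

vmem_ratio = 2.1  # yarn.nodemanager.vmem-pmem-ratio default

map_mb = effective_container_mb(None)      # mapreduce.map.memory.mb not set
reduce_mb = effective_container_mb(2000)   # mapreduce.reduce.memory.mb

for name, mb in [("map", map_mb), ("reduce", reduce_mb)]:
    print(f"{name}: {mb} MB physical = {mb/1024:.1f} GB, "
          f"virtual limit = {mb*vmem_ratio/1024:.1f} GB")
# map: 3000 MB physical = 2.9 GB, virtual limit = 6.2 GB
# reduce: 3000 MB physical = 2.9 GB, virtual limit = 6.2 GB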

Summary: in this case the problem was solved by raising the yarn.scheduler.minimum-allocation-mb parameter to 6000.

[Solved] Tez Compression codec com.hadoop.compression.lzo.LzoCodec not found.

Error background

After installing Tez, executing select count(*) from student; in the hive shell reported an error.

Error phenomenon

22/03/03 22:34:48 INFO client.DAGClientImpl: DAG: State: FAILED Progress: 0% TotalTasks: 1 Succeeded: 0 Running: 0 Failed: 0 Killed: 0
22/03/03 22:34:48 INFO client.DAGClientImpl: DAG completed. FinalState=FAILED
22/03/03 22:34:48 INFO examples.OrderedWordCount: DAG diagnostics: [Vertex failed, vertexName=Tokenizer, vertexId=vertex_1646363809353_0002_1_00, diagnostics=[Vertex vertex_1646363809353_0002_1_00 [Tokenizer] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: Input initializer failed, vertex=vertex_1646363809353_0002_1_00 [Tokenizer], java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
        at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
        at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:58)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:399)
        at org.apache.hadoop.mapreduce.split.TezGroupedSplitsInputFormat.getSplits(TezGroupedSplitsInputFormat.java:97)
        at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateNewSplits(MRInputHelpers.java:448)
        at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateInputSplitsToMem(MRInputHelpers.java:329)
        at org.apache.tez.mapreduce.common.MRInputAMSplitGenerator.initialize(MRInputAMSplitGenerator.java:122)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2255)
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
        ... 18 more
], Vertex killed, vertexName=Summation, vertexId=vertex_1646363809353_0002_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1646363809353_0002_1_01 [Summation] killed/failed due to:OTHER_VERTEX_FAILURE], Vertex killed, vertexName=Sorter, vertexId=vertex_1646363809353_0002_1_02, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1646363809353_0002_1_02 [Sorter] killed/failed due to:OTHER_VERTEX_FAILURE], DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:2]

Reason for error

The LZO-related dependencies cannot be found.

Solution

When LZO is installed, its dependencies are placed into Hadoop, so we only need to make Tez load Hadoop's libraries. Modify tez-site.xml and add the following configuration:

    <property>
        <name>tez.use.cluster.hadoop-libs</name>
        <value>true</value>
    </property>