[Solved] passwd: Authentication token manipulation error

Problem background

The following error is displayed when changing the root password in the production environment:

[root]# passwd root
Changing password for user root.
passwd: Authentication token manipulation error

Solution

Check the attributes of the /etc/passwd and /etc/shadow files

[root]# lsattr /etc/shadow
----i----------- /etc/shadow
[root]# lsattr /etc/passwd
----i----------- /etc/passwd

Notes (meaning of the chattr/lsattr attribute letters):

  • A: Atime; the file's last access time is not updated when the file is accessed.
  • S: Sync; once an application writes to the file, the change is immediately written to disk (synchronous updates).
  • a: Append only; the system only allows appending data to this file and does not allow any process to overwrite or truncate it. On a directory, processes may only create and modify files inside it, not delete them.
  • c: The file is transparently compressed on disk.
  • d: The file or directory will not be backed up when the dump program runs.
  • D: On a directory, changes are written synchronously to disk (synchronous directory updates).
  • i: Immutable; the system does not allow any modification to this file. On a directory, processes may only modify existing files inside it, and may not create or delete files.
  • s: Secure deletion; when the file is deleted, its data blocks are overwritten with zeros on disk, so it cannot be recovered.
  • u: Undeletable; when an application requests deletion, the system keeps the data blocks so the file can be undeleted later, protecting against accidental deletion.
  • t: No tail-merging; the partial block fragment at the end of the file will not be merged with other files.
  • X: (legacy compression) the raw contents of a compressed file can be accessed directly.

 

Remove the i (immutable) attribute from /etc/passwd and /etc/shadow

[root]# chattr -i /etc/shadow
[root]# chattr -i /etc/passwd

 

Check the attributes of /etc/passwd and /etc/shadow again

[root]# lsattr /etc/shadow
---------------- /etc/shadow
[root]# lsattr /etc/passwd
---------------- /etc/passwd

Then change the user password again; it now succeeds.
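The steps above can be sketched as a short script. The re-protection step at the end is an assumption: only restore the immutable bit if it was set deliberately in the first place.

```shell
# Sketch of the fix sequence (requires root). Restoring +i afterwards
# is optional and only makes sense if the bit had been set on purpose.
fix_immutable_passwd() {
    lsattr /etc/passwd /etc/shadow        # confirm the i attribute is set
    chattr -i /etc/passwd /etc/shadow     # clear the immutable bit
    passwd root                           # change the password (now works)
    chattr +i /etc/passwd /etc/shadow     # optional: re-protect the files
}
```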

[Solved] xtrabackup: error: xb_load_tablespaces() failed with error code 57

Problem description: the xtrabackup backup script fails with errors when backing up the database.

DB version: MySQL 8.0.26

Xtrabackup: percona-xtrabackup-8.0.27-19-Linux-x86_64.glibc2.12.tar.gz

[root@orch2 scripts]# $xtrDir --defaults-file=$mysql_cnf --user=$mysql_user --password=$mysql_password --socket=$mysql_socket --compress --compress-threads=2 --backup --target-dir=$target_dir
xtrabackup: recognized server arguments: --datadir=/home/mysql/db_orch2/data --tmpdir=/home/mysql/db_orch2/tmp --log_bin=/home/mysql/db_orch2/binlog/orch2-bin --log-bin-index=/home/mysql/db_orch2/binlog/orch2-bin.index --server-id=1330611 --innodb_open_files=63000 --innodb_data_home_dir=/home/mysql/db_orch2/data --innodb_log_group_home_dir=/home/mysql/db_orch2/data --innodb_log_file_size=8G --innodb_log_files_in_group=4 --innodb_undo_directory=/home/mysql/db_orch2/ulog --innodb_undo_tablespaces=3 --innodb_flush_log_at_trx_commit=2 --innodb_flush_method=O_DIRECT --innodb_io_capacity=3000 --innodb_buffer_pool_size=64G --innodb_log_buffer_size=32M --innodb_max_dirty_pages_pct=85 --innodb_adaptive_hash_index=1 --innodb_data_file_path=ibdata1:512M:autoextend --innodb_write_io_threads=16 --innodb_read_io_threads=16
xtrabackup: recognized client arguments: --password=* --socket=/home/mysql/db_orch2/mysql.sock --compress --compress-threads=2 --backup=1 --target-dir=/home/mysql/backup/13306/2022-04-06
/root/percona-xtrabackup/bin/xtrabackup version 8.0.27-19 based on MySQL server 8.0.27 Linux (x86_64) (revision id: 50dbc8dadda)
Can't locate Data/Dumper.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at - line 749.
BEGIN failed--compilation aborted at - line 749.
220406 12:26:14 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: /home/mysql/db_orch2/mysql.sock
Using server version 8.0.26
220406 12:26:14 Executing LOCK INSTANCE FOR BACKUP...
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /home/mysql/db_orch2/data
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = /home/mysql/db_orch2/data
xtrabackup: innodb_data_file_path = ibdata1:512M:autoextend
xtrabackup: innodb_log_group_home_dir = /home/mysql/db_orch2/data
xtrabackup: innodb_log_files_in_group = 4
xtrabackup: innodb_log_file_size = 8589934592
xtrabackup: using O_DIRECT
Number of pools: 1
xtrabackup: initialize_service_handles suceeded
220406 12:26:14 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: /home/mysql/db_orch2/mysql.sock
xtrabackup: Redo Log Archiving is not set up.
Starting to parse redo log at lsn = 1276785485862
220406 12:26:14 >> log scanned up to (1276786464962)
xtrabackup: Generating a list of tablespaces
xtrabackup: Generating a list of tablespaces
Scanning './'
Scanning '/home/mysql/db_orch2/ulog/'
Completed space ID check of 2 files.
Allocated tablespace ID 2198 for eomaqzy_data/app_source_rela, old maximum was 0
220406 12:26:15 >> log scanned up to (1276786574060)
Undo tablespace number 1 was being truncated when mysqld quit.
Cannot recover a truncated undo tablespace in read-only mode
xtrabackup: error: xb_load_tablespaces() failed with error code 57

 

Solution: check whether there are leftover undo files in the data file directory; move them away and run the backup again:

 

mv undo*.log /tmp
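That check can be sketched as below. The data directory path is the one from this post's server; adjust it to your own my.cnf, and the commented mv line mirrors the fix above.

```shell
# Count leftover undo files in the InnoDB data directory before retrying
# the backup; a non-zero count means there is something to move away.
datadir="${DATADIR:-/home/mysql/db_orch2/data}"
leftover=$(ls "$datadir"/undo* 2>/dev/null | wc -l)
echo "undo files found: $leftover"
# if [ "$leftover" -gt 0 ]; then mv "$datadir"/undo* /tmp/; fi
```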


Error 2:

xtrabackup: Can't create/write to file '/home/mysql/backup/13306/2022-04-06/eomaqzy_data/pl02_inv_meter_data.ibd.qp' (OS errno 24 - Too many open files)

 

 

The OS open file limit has not been adjusted:

[root@orch2 13306]# ulimit -n
1024

[root@orch2 ~]# grep -i nofile /etc/security/limits.conf
#         - nofile - max number of open file descriptors

[root@orch2 ~]# vim /etc/security/limits.conf
* hard nofile 65535
* soft nofile 65535

Disconnect and log back in:
[root@orch2 ~]# ulimit -n
65535
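To verify both limits in the new session, something like the following works; the soft limit is the one xtrabackup actually gets at startup, so after the limits.conf change and a re-login both values should read 65535.

```shell
# Print the soft and hard open-file limits for the current shell.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```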

 

Back up again:

+ echo ========================= Run full backup beginning =========================
+ /root/percona-xtrabackup/bin/xtrabackup --defaults-file=/home/mysql/db_orch2/conf/orch2.cnf --user=root --password=XXXXX --socket=/home/mysql/db_orch2/mysql.sock --compress --compress-threads=2 --backup --target-dir=/home/mysql/backup/13306/2022-04-06
+ echo ========================= Run full backup finished successfully =========================

[Solved] Azure Function Enable Managed Identity and PowerShell Function Report Error: ERROR: ManagedIdentityCredential authentication failed

Problem Description

I wrote a PowerShell Function to log in to Azure China and get Azure AD user information, but it failed with: [Error] ERROR: ManagedIdentityCredential authentication failed: An unexpected error occured while fetching the AAD Token. Please contact support with this provided Correlation Id. Status: 500 (Internal Server Error).

 

Problem analysis

Analyzing the cause of the error: PowerShell is logging in to the wrong cloud. Since this runs in Azure China, Connect-AzAccount must specify -Environment AzureChinaCloud when logging in.

The PowerShell Function App automatically adds a profile.ps1 file to the root directory. The default file content is:

# Azure Functions profile.ps1
#
# This profile.ps1 will get executed every "cold start" of your Function App. 
# "cold start" occurs when:
#
# * A Function App starts up for the very first time 
# * A Function App starts up after being de-allocated due to inactivity
#
# You can define helper functions, run commands, or specify environment variables 
# NOTE: any variables defined that are not environment variables will get reset after the first execution

# Authenticate with Azure PowerShell using MSI. 
# Remove this if you are not planning on using MSI or Azure PowerShell. 
if ($env:MSI_SECRET) {
    Disable-AzContextAutosave -Scope Process | Out-Null
    Connect-AzAccount -Identity
}

# Uncomment the next line to enable legacy AzureRm alias in Azure PowerShell. 
# Enable-AzureRmAlias

# You can also define functions or aliases that can be referenced in any of your PowerShell functions.
It can be seen that the default Connect-AzAccount -Identity does not specify an Environment, so when the Function runs it connects to Global Azure by default, which is why ManagedIdentityCredential authentication failed appears.

PS: If Managed Identity is not enabled, $env:MSI_SECRET evaluates to false and the code in profile.ps1 will not be executed.

 

Solution

On the Function App page, click App Service Editor, and modify the profile.ps1 file.

Replace

Connect-AzAccount -Identity

with

Connect-AzAccount -Environment AzureChinaCloud -Identity


After the modification, go back to the Function –> Code + Test page and the test problem disappears. The function code used for the test:

using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."
Write-Host $env:MSI_SECRET

# Interact with query parameters or the body of the request.
$name = $Request.Query.Name
if (-not $name) {
    $name = $Request.Body.Name
}

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."

if ($name) {
    $body = "Hello, $name. This HTTP triggered function executed successfully."
}

# Log in to Azure China
Connect-AzAccount -Environment AzureChinaCloud -Identity

# Get user information
Get-AzADUser -First 2 -Select 'City' -AppendSelected

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})

Note: for Connect-AzAccount to run successfully, you need to add 'Az' = '7.*' to requirements.psd1 so that the Function App instance installs the Az module. If your Function needs other PowerShell modules, you can add them here as well.

# This file enables modules to be automatically managed by the Functions service. 
# See https://aka.ms/functionsmanageddependency for additional information.
#
@{
    # For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
    # To use the Az module in your function app, please uncomment the line below.
    'Az' = '7.*'
}

[Solved] Application_Error not firing when customerrors = “On”

Question:

I have code in the global.asax file’s Application_Error event which executes when an error occurs and emails details of the error to myself.

void Application_Error(object sender, EventArgs e)
{
    var error = Server.GetLastError();

    if (error.Message != "Not Found")
    {
        // Send email here...
    }

}

This works fine when I’m running it in Visual Studio, however when I publish to our live server the Application_Error event does not fire.

After some testing I can get the Application_Error firing when I set customErrors="Off", however setting it back to customErrors="On" stops the event from firing again.

Can anyone suggest why Application_Error would not be firing when customErrors are enabled in the web.config?

 

Solution 1:

UPDATE
Since this answer does provide a solution, I will not edit it, but I have found a much cleaner way of solving this problem. See my other answer for details…

Original Answer:
I figured out why the Application_Error() method is not being invoked…

Global.asax.cs

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute()); // this line is the culprit
    }
...
}

By default (when a new project is generated), an MVC application has some logic in the Global.asax.cs file. This logic is used for mapping routes and registering filters. By default, it only registers one filter: a HandleErrorAttribute filter. When customErrors are on (or through remote requests when it is set to RemoteOnly), the HandleErrorAttribute tells MVC to look for an Error view and it never calls the Application_Error() method. I couldn’t find documentation of this but it is explained in this answer on programmers.stackexchange.com.

To get the Application_Error() method called for every unhandled exception, simply remove the line which registers the HandleErrorAttribute filter.

Now the problem is: How to configure the customErrors to get what you want…

The customErrors section defaults to redirectMode="ResponseRedirect". You can specify the defaultRedirect attribute to be an MVC route too. I created an ErrorController which was very simple and changed my web.config to look like this…

web.config

<customErrors mode="RemoteOnly" redirectMode="ResponseRedirect" defaultRedirect="~/Error">
  <error statusCode="404" redirect="~/Error/PageNotFound" />
</customErrors>

The problem with this solution is that it does a 302 redirect to your error URLs, and then those pages respond with a 200 status code. This leads to Google indexing the error pages, which is bad. It also isn’t very conformant to the HTTP spec. What I wanted was not to redirect, but to overwrite the original response with my custom error views.

I tried changing redirectMode="ResponseRewrite". Unfortunately, this option does not support MVC routes, only static HTML pages or ASPX. I tried a static HTML page first; the response code was still 200, but at least it didn’t redirect. I then got an idea from this answer…

I decided to give up on MVC for error handling. I created an Error.aspx and a PageNotFound.aspx. These pages were very simple but they had one piece of magic…

<script type="text/C#" runat="server">
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        Response.StatusCode = (int) System.Net.HttpStatusCode.InternalServerError;
    }
</script>

This block tells the page to be served with the correct status code. Of course, on the PageNotFound.aspx page, I used HttpStatusCode.NotFound instead. I changed my web.config to look like this…

<customErrors mode="RemoteOnly" redirectMode="ResponseRewrite" defaultRedirect="~/Error.aspx">
  <error statusCode="404" redirect="~/PageNotFound.aspx" />
</customErrors>

It all worked perfectly!

Summary:

  • Remove the line: filters.Add(new HandleErrorAttribute());
  • Use Application_Error() method to log exceptions
  • Use customErrors with a ResponseRewrite, pointing at ASPX pages
  • Make the ASPX pages responsible for their own response status codes

There are a couple downsides I have noticed with this solution.

  • The ASPX pages can’t share any markup with Razor templates, I had to rewrite our website’s standard header and footer markup for a consistent look and feel.
  • The *.aspx pages can be accessed directly by hitting their URLs

There are work-arounds for these problems but I wasn’t concerned enough by them to do any extra work.

I hope this helps everyone!

 

Solution 2:

I solved this by creating an ExceptionFilter and logging the error there instead of in Application_Error. All you need to do is register it in RegisterGlobalFilters.

log4netExceptionFilter.cs

using System;
using System.Web.Mvc;

public class log4netExceptionFilter : IExceptionFilter
{
    public void OnException(ExceptionContext context)
    {
        Exception ex = context.Exception;
        if (!(ex is HttpException)) //ignore "file not found"
        {
            //Log error here
        }
    }
}

Global.asax.cs

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new log4netExceptionFilter()); //must be before HandleErrorAttribute
    filters.Add(new HandleErrorAttribute());
}

[Solved] MAVEN-COMPILER-PLUGIN Compile Error: FATAL ERROR: UNABLE TO FIND PACKAGE JAVA.LANG IN CLASSPATH OR BOOTCLASSPATH

When I used maven-compiler-plugin and added some compiler arguments (code below), the two bootclasspath entries were separated by a literal semicolon, which caused the error above.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.0</version>
    <configuration>
        <!-- 1.8 and 1.7 don't matter -->
        <source>1.7</source>
        <target>1.7</target>
        <compilerArguments>
            <!-- Do not write, only rt.jar by default -->
            <bootclasspath>${java.home}/lib/rt.jar;${java.home}/lib/jce.jar</bootclasspath>
        </compilerArguments>
    </configuration>
</plugin>

 

 

 

Solution

The replacement code is shown below, using ${path.separator} instead of a literal semicolon. Windows and Linux use different path delimiters: Windows uses a semicolon, Linux uses a colon, and ${path.separator} resolves to the correct one for the current platform.

 

<bootclasspath>${java.home}/lib/rt.jar${path.separator}${java.home}/lib/jce.jar</bootclasspath>
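${path.separator} resolves the same way as Java's File.pathSeparator. A rough shell sketch of the platform difference (the uname patterns for the common Windows shells are an assumption):

```shell
# Pick the classpath separator the way the JVM does: ';' on Windows,
# ':' everywhere else.
case "$(uname -s 2>/dev/null)" in
    CYGWIN*|MINGW*|MSYS*) sep=';' ;;
    *)                    sep=':' ;;
esac
echo "path.separator=$sep"
```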

[Solved] Json.Net Error: Error getting value from ‘ScopeId’ on ‘System.Net.IPAddress’

The IPAddress class is not very friendly to serialization, as you’ve seen. Not only will it throw a SocketException if you try to access the ScopeID field for an IPv4 address, but it will also throw if you try to access the Address field directly for an IPv6 address.

To get around the exceptions, you will need a custom JsonConverter. A converter allows you to tell Json.Net exactly how you’d like it to serialize and/or deserialize a particular type of object. For an IPAddress, it seems the easiest way to get the data that satisfies everyone is simply to convert it to its string representation and back. We can do that in the converter. Here is how I would write it:

class IPAddressConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return (objectType == typeof(IPAddress));
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        writer.WriteValue(value.ToString());
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        return IPAddress.Parse((string)reader.Value);
    }
}

Pretty straightforward, as these things go. But this is not the end of the story. If you need to go round-trip with your IPEndPoint, then you will need a converter for it as well. Why? Because IPEndPoint does not contain a default constructor, so Json.Net will not know how to instantiate it. Fortunately, this converter is not difficult to write either:

class IPEndPointConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return (objectType == typeof(IPEndPoint));
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        IPEndPoint ep = (IPEndPoint)value;
        JObject jo = new JObject();
        jo.Add("Address", JToken.FromObject(ep.Address, serializer));
        jo.Add("Port", ep.Port);
        jo.WriteTo(writer);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        JObject jo = JObject.Load(reader);
        IPAddress address = jo["Address"].ToObject<IPAddress>(serializer);
        int port = (int)jo["Port"];
        return new IPEndPoint(address, port);
    }
}

So, now that we have the converters, how do we use them? Here is a simple example program that demonstrates. It first creates a couple of endpoints, serializes them to JSON using the custom converters, then immediately deserializes the JSON back into endpoints again using the same converters.

public class Program
{
    static void Main(string[] args)
    {
        var endpoints = new IPEndPoint[]
        {
            new IPEndPoint(IPAddress.Parse("8.8.4.4"), 53),
            new IPEndPoint(IPAddress.Parse("2001:db8::ff00:42:8329"), 81)
        };

        var settings = new JsonSerializerSettings();
        settings.Converters.Add(new IPAddressConverter());
        settings.Converters.Add(new IPEndPointConverter());
        settings.Formatting = Formatting.Indented;

        string json = JsonConvert.SerializeObject(endpoints, settings);
        Console.WriteLine(json);

        var endpoints2 = JsonConvert.DeserializeObject<IPEndPoint[]>(json, settings);

        foreach (IPEndPoint ep in endpoints2)
        {
            Console.WriteLine();
            Console.WriteLine("AddressFamily: " + ep.AddressFamily);
            Console.WriteLine("Address: " + ep.Address);
            Console.WriteLine("Port: " + ep.Port);
        }
    }
}

Here is the output:

[
  {
    "Address": "8.8.4.4",
    "Port": 53
  },
  {
    "Address": "2001:db8::ff00:42:8329",
    "Port": 81
  }
]

AddressFamily: InterNetwork
Address: 8.8.4.4
Port: 53

AddressFamily: InterNetworkV6
Address: 2001:db8::ff00:42:8329
Port: 81

Fiddle: https://dotnetfiddle.net/tK7NKY

  • Code of WriteJson can be simplified using JObject too. 
  • Performance impact of WriteJson and ReadJson can be improved by using the writer and reader objects, avoiding JObject allocation. I have submitted an edit to this very useful answer. 

[Solved] Error: ER_NOT_SUPPORTED_AUTH_MODE: Client does not support authentication protocol requested by serv

mysql reports Error: ER_NOT_SUPPORTED_AUTH_MODE: Client does not support authentication protocol requested by server; consider upgrading MySQL client

Cause: MySQL 8.0 changed the default authentication plugin to caching_sha2_password, which older clients do not support.

 

Solution:

Execute the following commands:

mysql -u root -p

(enter the password, e.g. 123456)

use mysql;

alter user 'root'@'localhost' identified with mysql_native_password by '123456';

flush privileges;

Note: 123456 is my own password for connecting to the database.
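The fix can also be scripted. This is a hypothetical wrapper: the password is a placeholder, and the mysql client must be on PATH when you actually apply it (the echo lets you review the statement first).

```shell
# Build the ALTER USER statement that switches the account back to the
# old authentication plugin; review it, then pipe it into mysql.
user='root'; host='localhost'; pass='123456'
sql="ALTER USER '${user}'@'${host}' IDENTIFIED WITH mysql_native_password BY '${pass}'; FLUSH PRIVILEGES;"
echo "$sql"
# printf '%s\n' "$sql" | mysql -u root -p   # uncomment to apply
```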

[Solved] flume Install Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty

Problems with flume installation:

Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Error: Could not find or load main class org.apache.flume.tools.VersionInfo

 

Solution:

1. Check your configuration path:

sudo vim /etc/profile

After correcting the path, reload the profile and run flume again:

source /etc/profile

2. Flume conflicts with HBase

Solution: modify the HBase configuration file hbase-env.sh in one of two ways:

1. Comment out the HBASE_CLASSPATH line in hbase-env.sh:

# Extra Java CLASSPATH elements. Optional.
#export HBASE_CLASSPATH=/home/hadoop/hbase/conf

2. Or change HBASE_CLASSPATH to JAVA_CLASSPATH, configured as follows:

# Extra Java CLASSPATH elements. Optional.
export JAVA_CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
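Option 1 can also be done non-interactively. A sketch using sed, demonstrated here on a temporary copy (point conf at your real hbase-env.sh to apply it):

```shell
# Comment out any 'export HBASE_CLASSPATH=...' line in hbase-env.sh.
conf=$(mktemp)
printf 'export HBASE_CLASSPATH=/home/hadoop/hbase/conf\n' > "$conf"
sed -i 's/^export HBASE_CLASSPATH/#&/' "$conf"
cat "$conf"
```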

Error resolving template [index], template might not exist or might not be accessible by any of the configured Template Resolvers

The problem here is that the index template is not found. I checked it, and the path of the index was wrong:


@Controller
public class shrioController {
    @RequestMapping({"/","/index"})
    public String toIndex(Model model){
        model.addAttribute("msg","hello,Shiro");
        return "index";
    }
}

The index path here should be “/templates/index”

[Solved] Windows Error: WslRegisterDistribution failed with error: 0x80070050

I recently upgraded the Windows 10 system on an old computer and found that the Ubuntu 20.04 installed in WSL 2 could no longer start normally in Windows Terminal (I had previously set Ubuntu 20.04 as the default startup profile).

The error reported:

WslRegisterDistribution failed with error: 0x80070050

My thoughts:

The error says the WSL distribution cannot be registered, so the fix is to edit the distribution-related configuration in the registry.

Solution:

Back up your distribution before trying this.

1. Run wsl --shutdown (from PowerShell or CMD).
2. In Windows, run the Registry Editor.
3. Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss.
4. Find the subkey whose DistributionName is Ubuntu20.04LTS, and change Ubuntu20.04LTS to Ubuntu-20.04.

In theory, this fixes the problem by changing the distribution name back to what it should be.