
VSCode: Running the contributed command: what to do when 'extension.node-debug.startSession' failed

Description:

What to do if you are using VSCode and get the error "Running the contributed command: 'extension.node-debug.startSession' failed".

This symptom seems to be limited to macOS.
In my case it happened on macOS 10.12.4 with VSCode 1.12.1.

Solution:

Add the following line to each launch configuration in launch.json:

"protocol": "legacy",

and it will work.
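For reference, a complete launch configuration with the workaround applied might look like this (the program path and name here are placeholders, not from the original report):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Launch Program",
            "program": "${workspaceFolder}/app.js",
            "protocol": "legacy"
        }
    ]
}
```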

Better solution

Since the launch.json format has changed over time, it may be better to delete the file once and let VSCode regenerate it automatically.
In my case I have about 30 launch configurations for tests and other tasks, so I haven't done that yet…

[Solved] Segues initiated directly from view controllers must have an identifier

Xcode 9.4.1 (9F2000)

The following warning occurred in an iOS Storyboard.

~/ProjectName/ProjectName/Base.lproj/Main.storyboard:
Segues initiated directly from view controllers must have an identifier

A segue initiated directly from a view controller (rather than from a control such as a button) requires an identifier when it connects to another view controller.

The warning disappears when you select the segue and set an Identifier in the Attributes inspector.


“If you use Segue, you need an ID, so give it a name.”

[Solved] Lazy loading NSBundle MobileCoreServices.framework…

Lazy loading NSBundle MobileCoreServices.framework,

Loaded MobileCoreServices.framework,

System group container for systemgroup.com.apple.configurationprofiles path is /Users/develop/Library/Developer/CoreSimulator/Devices/083C0102-C85F-463A-96F4-CA1B9AC7919D/data/Containers/Shared/SystemGroup/systemgroup.com.apple.configurationprofiles

1. Open your project and go to Scheme → Edit Scheme → Run → Environment Variables. Add a variable named OS_ACTIVITY_MODE with the value $(DEBUG_ACTIVITY_MODE).

2. Select your project, open Build Settings, and click + to add a user-defined setting named DEBUG_ACTIVITY_MODE. Then click + under Debug, select "Any iOS Simulator SDK", and set the value to "default".

Done!

WPF TextBox Placeholder

<!-- lang: xml -->
<TextBox>
    <TextBox.Resources>
        <VisualBrush x:Key="HelpBrush" TileMode="None" Opacity="0.3" Stretch="None" AlignmentX="Left">
            <VisualBrush.Visual>
                <TextBlock FontStyle="Italic" Text="Please input your username"/>
            </VisualBrush.Visual>
        </VisualBrush>
    </TextBox.Resources>
    <TextBox.Style>
        <Style TargetType="TextBox">
            <Style.Triggers>
                <Trigger Property="Text" Value="{x:Null}">
                    <Setter Property="Background" Value="{StaticResource HelpBrush}"/>
                </Trigger>
                <Trigger Property="Text" Value="">
                    <Setter Property="Background" Value="{StaticResource HelpBrush}"/>
                </Trigger>
            </Style.Triggers>
        </Style>
    </TextBox.Style>
</TextBox>

 

ACL permission control of zookeeper

Permission test

Create directory

[zk: localhost:2181(CONNECTED) 1] create /dlw "dlw"
Created /dlw

Check directory permissions

[zk: localhost:2181(CONNECTED) 3] getAcl /dlw
'world,'anyone
: cdrwa

Modify the ACL of the directory: this grants the accumulo user access to the /dlw directory. The password digest (the Base64-encoded SHA-1 hash) is SkvnZlrIQ19GNd7eLDXGKg0Esgw=, and r means read-only.

[zk: localhost:2181(CONNECTED) 5] setAcl /dlw digest:accumulo:SkvnZlrIQ19GNd7eLDXGKg0Esgw=:r
cZxid = 0x30000003f
ctime = Mon Feb 05 16:47:14 CHOT 2018
mZxid = 0x30000003f
mtime = Mon Feb 05 16:47:14 CHOT 2018
pZxid = 0x30000003f
cversion = 0
dataVersion = 0
aclVersion = 1
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

Check the directory permissions again

[zk: localhost:2181(CONNECTED) 6] getAcl /dlw
'digest,'accumulo:SkvnZlrIQ19GNd7eLDXGKg0Esgw=
: r

Now the directory can no longer be accessed because of insufficient permissions:

[zk: localhost:2181(CONNECTED) 7] ls /dlw
Authentication is not valid : /dlw

Then I realized that although I knew the digest of the accumulo user's password, I didn't know the password itself, so I could not access the /dlw directory.

In this situation you can operate as ZooKeeper's ACL super administrator.

ACL super administrator of zookeeper

Modify zookeeper’s startup script

$ cd $ZOOKEEPER_HOME/bin
$ vi zkServer.sh

Add a line

SUPER_ACL="-Dzookeeper.DigestAuthenticationProvider.superDigest=super:xQJmxLMiHGwaqBvst5y6rkB6HQs="
Here the digest super:xQJmxLMiHGwaqBvst5y6rkB6HQs= corresponds to the username and password super:admin.
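If you want to use a different super password, you can generate the digest yourself: ZooKeeper's digest scheme is the Base64 encoding of the SHA-1 hash of "user:password". Assuming openssl is installed, the digest used above can be reproduced like this:

```shell
# ZooKeeper digest = Base64(SHA-1("user:password"))
# Reproduce the super:admin digest used in zkServer.sh above
printf '%s' "super:admin" | openssl dgst -binary -sha1 | openssl base64
# prints: xQJmxLMiHGwaqBvst5y6rkB6HQs=
```

The same digest can also be generated with ZooKeeper's own org.apache.zookeeper.server.auth.DigestAuthenticationProvider class.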

Then modify the startup command: find the nohup line and add ${SUPER_ACL} to it:

nohup $JAVA $ZOO_DATADIR_AUTOCREATE "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" \
    "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" "${SUPER_ACL}" \
    -cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &

Distribute zkServer.sh to the other ZooKeeper nodes and restart the ZooKeeper service.

Log in again with zkCli.sh, authenticate as the super administrator, and you can operate on /dlw:

[zk: localhost:2181(CONNECTED) 14] addauth digest super:admin
[zk: localhost:2181(CONNECTED) 15] ls /dlw
[]

Change the ACL of the /dlw directory back to the initial default:

[zk: localhost:2181(CONNECTED) 23] setAcl /dlw world:anyone:crwda
cZxid = 0x30000003f
ctime = Mon Feb 05 16:47:14 CHOT 2018
mZxid = 0x30000003f
mtime = Mon Feb 05 16:47:14 CHOT 2018
pZxid = 0x30000003f
cversion = 0
dataVersion = 0
aclVersion = 2
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: localhost:2181(CONNECTED) 24] getAcl /dlw
'world,'anyone
: cdrwa

ZooKeeper's authentication schemes

digest: the client is authenticated by username and password, such as user:password. The digest is generated as the Base64 encoding of the SHA-1 hash of user:password.

auth: uses no ID; it stands for any already-authenticated user.

ip: the client is authenticated by IP address, such as 172.2.0.0/24.

world: has the single fixed user anyone; permissions are open to all clients.

super: in this scheme, the corresponding ID has super permissions and can do anything (cdrwa).

A node's perms can be of several types:

create: allows create operations on child nodes

read: allows getChildren and getData operations on this node

write: allows setData operations on this node

delete: allows delete operations on child nodes

admin: allows setAcl operations on this node

When setting ACL permissions, the abbreviation cdrwa is used.

zlib.h:no such file or directory

error: zlib.h:no such file or directory

I looked at the file that produced the error; it contains the line #include <zlib.h>.

No such file? Then let's provide it.

Searching for zlib.h shows that it belongs to zlib.

So, download zlib.

zlib address: http://www.zlib.net/

And then

Zlib Standard Installation Guide:

zlib library files are placed into /usr/local/lib and zlib header files are placed

into /usr/local/include, by default.

build static libraries

…/zlib-1.2.1]# ./configure

…/zlib-1.2.1]# make test

…/zlib-1.2.1]# make install

build shared libraries

…/zlib-1.2.1]# make clean

…/zlib-1.2.1]# ./configure --shared

…/zlib-1.2.1]# make test

…/zlib-1.2.1]# make install

…/zlib-1.2.1]# cp zutil.h /usr/local/include

…/zlib-1.2.1]# cp zutil.c /usr/local/include

/usr/local/lib should now contain…

libz.a

libz.so -> libz.so.1.2.1

libz.so.1 -> libz.so.1.2.1

libz.so.1.2.1

/usr/local/include should now contain…

zconf.h

zlib.h

zutil.h

Optional zlib non standard installation instructions:

create the directory that will contain zlib

…/zlib-1.2.1]# mkdir /usr/local/zlib

follow the given procedure above, except

…/zlib-1.2.1]# ./configure --prefix=/usr/local/zlib

Update the run time linker

/etc/ld.so.cache will need to be updated with the new zlib shared lib: libz.so.1.2.1

for standard zlib installation…

add /usr/local/lib to /etc/ld.so.conf, if the path is not already present

/etc]# ldconfig

if zlib was installed with a prefix…

add /usr/local/zlib/lib to /etc/ld.so.conf

/etc]# ldconfig

Compile again, and it works. OK.
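A quick way to verify that the compiler can now find the header is to preprocess a one-line include. This is just a sanity check, and it assumes gcc is on the PATH:

```shell
# Preprocess a one-line file that includes zlib.h;
# success means the header is on the include path.
echo '#include <zlib.h>' | gcc -E -xc - >/dev/null 2>&1 \
    && echo "zlib.h found" \
    || echo "zlib.h missing"
```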

Talk about the high availability of Flink jobmanager

Preface

This article mainly looks at the high availability of the Flink JobManager.

Configuration

flink-conf.yaml

high-availability: zookeeper
high-availability.zookeeper.quorum: zookeeper:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /cluster_one # important: customize per cluster
high-availability.storageDir: file:///share

The option high-availability can be set to none or zookeeper. high-availability.zookeeper.quorum specifies the ZooKeeper peers; high-availability.zookeeper.path.root specifies the root node path in ZooKeeper; high-availability.cluster-id specifies the node name of the current cluster, which sits under the root node; high-availability.storageDir specifies the storage path for the JobManager metadata.

Masters file

localhost:8081
localhost:8082

The masters file specifies the addresses of the JobManagers.

HighAvailabilityMode

flink-runtime_2.11-1.7.1-sources.jar!/org/apache/flink/runtime/jobmanager/HighAvailabilityMode.java

public enum HighAvailabilityMode {
	NONE(false),
	ZOOKEEPER(true),
	FACTORY_CLASS(true);

	private final boolean haActive;

	HighAvailabilityMode(boolean haActive) {
		this.haActive = haActive;
	}

	/**
	 * Return the configured {@link HighAvailabilityMode}.
	 *
	 * @param config The config to parse
	 * @return Configured recovery mode or {@link HighAvailabilityMode#NONE} if not
	 * configured.
	 */
	public static HighAvailabilityMode fromConfig(Configuration config) {
		String haMode = config.getValue(HighAvailabilityOptions.HA_MODE);

		if (haMode == null) {
			return HighAvailabilityMode.NONE;
		} else if (haMode.equalsIgnoreCase(ConfigConstants.DEFAULT_RECOVERY_MODE)) {
			// Map old default to new default
			return HighAvailabilityMode.NONE;
		} else {
			try {
				return HighAvailabilityMode.valueOf(haMode.toUpperCase());
			} catch (IllegalArgumentException e) {
				return FACTORY_CLASS;
			}
		}
	}

	/**
	 * Returns true if the defined recovery mode supports high availability.
	 *
	 * @param configuration Configuration which contains the recovery mode
	 * @return true if high availability is supported by the recovery mode, otherwise false
	 */
	public static boolean isHighAvailabilityModeActivated(Configuration configuration) {
		HighAvailabilityMode mode = fromConfig(configuration);
		return mode.haActive;
	}
}

HighAvailabilityMode has three values: NONE, ZOOKEEPER, and FACTORY_CLASS; each has a property haActive indicating whether it supports high availability.

HighAvailabilityOptions

flink-core-1.7.1-sources.jar!/org/apache/flink/configuration/HighAvailabilityOptions.java

@PublicEvolving
@ConfigGroups(groups = {
	@ConfigGroup(name = "HighAvailabilityZookeeper", keyPrefix = "high-availability.zookeeper")
})
public class HighAvailabilityOptions {

	// ------------------------------------------------------------------------
	//  Required High Availability Options
	// ------------------------------------------------------------------------

	/**
	 * Defines high-availability mode used for the cluster execution.
	 * A value of "NONE" signals no highly available setup.
	 * To enable high-availability, set this mode to "ZOOKEEPER".
	 * Can also be set to FQN of HighAvailability factory class.
	 */
	@Documentation.CommonOption(position = Documentation.CommonOption.POSITION_HIGH_AVAILABILITY)
	public static final ConfigOption<String> HA_MODE =
			key("high-availability")
			.defaultValue("NONE")
			.withDeprecatedKeys("recovery.mode")
			.withDescription("Defines high-availability mode used for the cluster execution." +
				" To enable high-availability, set this mode to \"ZOOKEEPER\" or specify FQN of factory class.");

	/**
	 * The ID of the Flink cluster, used to separate multiple Flink clusters
	 * Needs to be set for standalone clusters, is automatically inferred in YARN and Mesos.
	 */
	public static final ConfigOption<String> HA_CLUSTER_ID =
			key("high-availability.cluster-id")
			.defaultValue("/default")
			.withDeprecatedKeys("high-availability.zookeeper.path.namespace", "recovery.zookeeper.path.namespace")
			.withDescription("The ID of the Flink cluster, used to separate multiple Flink clusters from each other." +
				" Needs to be set for standalone clusters but is automatically inferred in YARN and Mesos.");

	/**
	 * File system path (URI) where Flink persists metadata in high-availability setups.
	 */
	@Documentation.CommonOption(position = Documentation.CommonOption.POSITION_HIGH_AVAILABILITY)
	public static final ConfigOption<String> HA_STORAGE_PATH =
			key("high-availability.storageDir")
			.noDefaultValue()
			.withDeprecatedKeys("high-availability.zookeeper.storageDir", "recovery.zookeeper.storageDir")
			.withDescription("File system path (URI) where Flink persists metadata in high-availability setups.");

	// ------------------------------------------------------------------------
	//  Recovery Options
	// ------------------------------------------------------------------------

	/**
	 * Optional port (range) used by the job manager in high-availability mode.
	 */
	public static final ConfigOption<String> HA_JOB_MANAGER_PORT_RANGE =
			key("high-availability.jobmanager.port")
			.defaultValue("0")
			.withDeprecatedKeys("recovery.jobmanager.port")
			.withDescription("Optional port (range) used by the job manager in high-availability mode.");

	/**
	 * The time before a JobManager after a fail over recovers the current jobs.
	 */
	public static final ConfigOption<String> HA_JOB_DELAY =
			key("high-availability.job.delay")
			.noDefaultValue()
			.withDeprecatedKeys("recovery.job.delay")
			.withDescription("The time before a JobManager after a fail over recovers the current jobs.");

	// ------------------------------------------------------------------------
	//  ZooKeeper Options
	// ------------------------------------------------------------------------

	/**
	 * The ZooKeeper quorum to use, when running Flink in a high-availability mode with ZooKeeper.
	 */
	public static final ConfigOption<String> HA_ZOOKEEPER_QUORUM =
			key("high-availability.zookeeper.quorum")
			.noDefaultValue()
			.withDeprecatedKeys("recovery.zookeeper.quorum")
			.withDescription("The ZooKeeper quorum to use, when running Flink in a high-availability mode with ZooKeeper.");

	/**
	 * The root path under which Flink stores its entries in ZooKeeper.
	 */
	public static final ConfigOption<String> HA_ZOOKEEPER_ROOT =
			key("high-availability.zookeeper.path.root")
			.defaultValue("/flink")
			.withDeprecatedKeys("recovery.zookeeper.path.root")
			.withDescription("The root path under which Flink stores its entries in ZooKeeper.");

	public static final ConfigOption<String> HA_ZOOKEEPER_LATCH_PATH =
			key("high-availability.zookeeper.path.latch")
			.defaultValue("/leaderlatch")
			.withDeprecatedKeys("recovery.zookeeper.path.latch")
			.withDescription("Defines the znode of the leader latch which is used to elect the leader.");

	/** ZooKeeper root path (ZNode) for job graphs. */
	public static final ConfigOption<String> HA_ZOOKEEPER_JOBGRAPHS_PATH =
			key("high-availability.zookeeper.path.jobgraphs")
			.defaultValue("/jobgraphs")
			.withDeprecatedKeys("recovery.zookeeper.path.jobgraphs")
			.withDescription("ZooKeeper root path (ZNode) for job graphs");

	public static final ConfigOption<String> HA_ZOOKEEPER_LEADER_PATH =
			key("high-availability.zookeeper.path.leader")
			.defaultValue("/leader")
			.withDeprecatedKeys("recovery.zookeeper.path.leader")
			.withDescription("Defines the znode of the leader which contains the URL to the leader and the current" +
				" leader session ID.");

	/** ZooKeeper root path (ZNode) for completed checkpoints. */
	public static final ConfigOption<String> HA_ZOOKEEPER_CHECKPOINTS_PATH =
			key("high-availability.zookeeper.path.checkpoints")
			.defaultValue("/checkpoints")
			.withDeprecatedKeys("recovery.zookeeper.path.checkpoints")
			.withDescription("ZooKeeper root path (ZNode) for completed checkpoints.");

	/** ZooKeeper root path (ZNode) for checkpoint counters. */
	public static final ConfigOption<String> HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATH =
			key("high-availability.zookeeper.path.checkpoint-counter")
			.defaultValue("/checkpoint-counter")
			.withDeprecatedKeys("recovery.zookeeper.path.checkpoint-counter")
			.withDescription("ZooKeeper root path (ZNode) for checkpoint counters.");

	/** ZooKeeper root path (ZNode) for Mesos workers. */
	@PublicEvolving
	public static final ConfigOption<String> HA_ZOOKEEPER_MESOS_WORKERS_PATH =
			key("high-availability.zookeeper.path.mesos-workers")
			.defaultValue("/mesos-workers")
			.withDeprecatedKeys("recovery.zookeeper.path.mesos-workers")
			.withDescription(Description.builder()
				.text("The ZooKeeper root path for persisting the Mesos worker information.")
				.build());

	// ------------------------------------------------------------------------
	//  ZooKeeper Client Settings
	// ------------------------------------------------------------------------

	public static final ConfigOption<Integer> ZOOKEEPER_SESSION_TIMEOUT =
			key("high-availability.zookeeper.client.session-timeout")
			.defaultValue(60000)
			.withDeprecatedKeys("recovery.zookeeper.client.session-timeout")
			.withDescription("Defines the session timeout for the ZooKeeper session in ms.");

	public static final ConfigOption<Integer> ZOOKEEPER_CONNECTION_TIMEOUT =
			key("high-availability.zookeeper.client.connection-timeout")
			.defaultValue(15000)
			.withDeprecatedKeys("recovery.zookeeper.client.connection-timeout")
			.withDescription("Defines the connection timeout for ZooKeeper in ms.");

	public static final ConfigOption<Integer> ZOOKEEPER_RETRY_WAIT =
			key("high-availability.zookeeper.client.retry-wait")
			.defaultValue(5000)
			.withDeprecatedKeys("recovery.zookeeper.client.retry-wait")
			.withDescription("Defines the pause between consecutive retries in ms.");

	public static final ConfigOption<Integer> ZOOKEEPER_MAX_RETRY_ATTEMPTS =
			key("high-availability.zookeeper.client.max-retry-attempts")
			.defaultValue(3)
			.withDeprecatedKeys("recovery.zookeeper.client.max-retry-attempts")
			.withDescription("Defines the number of connection retries before the client gives up.");

	public static final ConfigOption<String> ZOOKEEPER_RUNNING_JOB_REGISTRY_PATH =
			key("high-availability.zookeeper.path.running-registry")
			.defaultValue("/running_job_registry/");

	public static final ConfigOption<String> ZOOKEEPER_CLIENT_ACL =
			key("high-availability.zookeeper.client.acl")
			.defaultValue("open")
			.withDescription("Defines the ACL (open|creator) to be configured on ZK node. The configuration value can be" +
				" set to “creator” if the ZooKeeper server configuration has the “authProvider” property mapped to use" +
				" SASLAuthenticationProvider and the cluster is configured to run in secure mode (Kerberos).");

	// ------------------------------------------------------------------------

	/** Not intended to be instantiated. */
	private HighAvailabilityOptions() {}
}

HighAvailabilityOptions defines the configuration options with the prefix high-availability (and high-availability.zookeeper).

HighAvailabilityServicesUtils

flink-runtime_2.11-1.7.1-sources.jar!/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtils.java

public class HighAvailabilityServicesUtils {

	public static HighAvailabilityServices createAvailableOrEmbeddedServices(
		Configuration config,
		Executor executor) throws Exception {
		HighAvailabilityMode highAvailabilityMode = LeaderRetrievalUtils.getRecoveryMode(config);

		switch (highAvailabilityMode) {
			case NONE:
				return new EmbeddedHaServices(executor);

			case ZOOKEEPER:
				BlobStoreService blobStoreService = BlobUtils.createBlobStoreFromConfig(config);

				return new ZooKeeperHaServices(
					ZooKeeperUtils.startCuratorFramework(config),
					executor,
					config,
					blobStoreService);

			case FACTORY_CLASS:
				return createCustomHAServices(config, executor);

			default:
				throw new Exception("High availability mode " + highAvailabilityMode + " is not supported.");
		}
	}

	public static HighAvailabilityServices createHighAvailabilityServices(
		Configuration configuration,
		Executor executor,
		AddressResolution addressResolution) throws Exception {

		HighAvailabilityMode highAvailabilityMode = LeaderRetrievalUtils.getRecoveryMode(configuration);

		switch (highAvailabilityMode) {
			case NONE:
				final Tuple2<String, Integer> hostnamePort = getJobManagerAddress(configuration);

				final String jobManagerRpcUrl = AkkaRpcServiceUtils.getRpcUrl(
					hostnamePort.f0,
					hostnamePort.f1,
					JobMaster.JOB_MANAGER_NAME,
					addressResolution,
					configuration);
				final String resourceManagerRpcUrl = AkkaRpcServiceUtils.getRpcUrl(
					hostnamePort.f0,
					hostnamePort.f1,
					ResourceManager.RESOURCE_MANAGER_NAME,
					addressResolution,
					configuration);
				final String dispatcherRpcUrl = AkkaRpcServiceUtils.getRpcUrl(
					hostnamePort.f0,
					hostnamePort.f1,
					Dispatcher.DISPATCHER_NAME,
					addressResolution,
					configuration);

				final String address = checkNotNull(configuration.getString(RestOptions.ADDRESS),
					"%s must be set",
					RestOptions.ADDRESS.key());
				final int port = configuration.getInteger(RestOptions.PORT);
				final boolean enableSSL = SSLUtils.isRestSSLEnabled(configuration);
				final String protocol = enableSSL ? "https://" : "http://";

				return new StandaloneHaServices(
					resourceManagerRpcUrl,
					dispatcherRpcUrl,
					jobManagerRpcUrl,
					String.format("%s%s:%s", protocol, address, port));
			case ZOOKEEPER:
				BlobStoreService blobStoreService = BlobUtils.createBlobStoreFromConfig(configuration);

				return new ZooKeeperHaServices(
					ZooKeeperUtils.startCuratorFramework(configuration),
					executor,
					configuration,
					blobStoreService);

			case FACTORY_CLASS:
				return createCustomHAServices(configuration, executor);

			default:
				throw new Exception("Recovery mode " + highAvailabilityMode + " is not supported.");
		}
	}

	/**
	 * Returns the JobManager's hostname and port extracted from the given
	 * {@link Configuration}.
	 *
	 * @param configuration Configuration to extract the JobManager's address from
	 * @return The JobManager's hostname and port
	 * @throws ConfigurationException if the JobManager's address cannot be extracted from the configuration
	 */
	public static Tuple2<String, Integer> getJobManagerAddress(Configuration configuration) throws ConfigurationException {

		final String hostname = configuration.getString(JobManagerOptions.ADDRESS);
		final int port = configuration.getInteger(JobManagerOptions.PORT);

		if (hostname == null) {
			throw new ConfigurationException("Config parameter '" + JobManagerOptions.ADDRESS +
				"' is missing (hostname/address of JobManager to connect to).");
		}

		if (port <= 0 || port >= 65536) {
			throw new ConfigurationException("Invalid value for '" + JobManagerOptions.PORT +
				"' (port of the JobManager actor system) : " + port +
				".  it must be greater than 0 and less than 65536.");
		}

		return Tuple2.of(hostname, port);
	}

	private static HighAvailabilityServices createCustomHAServices(Configuration config, Executor executor) throws FlinkException {
		final ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
		final String haServicesClassName = config.getString(HighAvailabilityOptions.HA_MODE);

		final HighAvailabilityServicesFactory highAvailabilityServicesFactory;

		try {
			highAvailabilityServicesFactory = InstantiationUtil.instantiate(
				haServicesClassName,
				HighAvailabilityServicesFactory.class,
				classLoader);
		} catch (Exception e) {
			throw new FlinkException(
				String.format(
					"Could not instantiate the HighAvailabilityServicesFactory '%s'. Please make sure that this class is on your class path.",
					haServicesClassName),
				e);
		}

		try {
			return highAvailabilityServicesFactory.createHAServices(config, executor);
		} catch (Exception e) {
			throw new FlinkException(
				String.format(
					"Could not create the ha services from the instantiated HighAvailabilityServicesFactory %s.",
					haServicesClassName),
				e);
		}
	}

	/**
	 * Enum specifying whether address resolution should be tried or not when creating the
	 * {@link HighAvailabilityServices}.
	 */
	public enum AddressResolution {
		TRY_ADDRESS_RESOLUTION,
		NO_ADDRESS_RESOLUTION
	}
}

HighAvailabilityServicesUtils provides static methods for creating HighAvailabilityServices: createAvailableOrEmbeddedServices, createHighAvailabilityServices, and createCustomHAServices.

createAvailableOrEmbeddedServices is mainly used by the Flink MiniCluster, while createHighAvailabilityServices is mainly used by the ClusterEntrypoint. The latter creates StandaloneHaServices when HighAvailabilityMode is NONE, ZooKeeperHaServices when it is ZOOKEEPER, and delegates to createCustomHAServices when it is FACTORY_CLASS.

HighAvailabilityServicesUtils also provides a static method getJobManagerAddress to obtain the hostname and port of the JobManager.

HighAvailabilityServices

flink-runtime_2.11-1.7.1-sources.jar!/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java

/**
 * The HighAvailabilityServices give access to all services needed for a highly-available
 * setup. In particular, the services provide access to highly available storage and
 * registries, as well as distributed counters and leader election.
 * 
 * <ul>
 *     <li>ResourceManager leader election and leader retrieval</li>
 *     <li>JobManager leader election and leader retrieval</li>
 *     <li>Persistence for checkpoint metadata</li>
 *     <li>Registering the latest completed checkpoint(s)</li>
 *     <li>Persistence for the BLOB store</li>
 *     <li>Registry that marks a job's status</li>
 *     <li>Naming of RPC endpoints</li>
 * </ul>
 */
public interface HighAvailabilityServices extends AutoCloseable {

	// ------------------------------------------------------------------------
	//  Constants
	// ------------------------------------------------------------------------

	/**
	 * This UUID should be used when no proper leader election happens, but a simple
	 * pre-configured leader is used. That is for example the case in non-highly-available
	 * standalone setups.
	 */
	UUID DEFAULT_LEADER_ID = new UUID(0, 0);

	/**
	 * This JobID should be used to identify the old JobManager when using the
	 * {@link HighAvailabilityServices}. With the new mode every JobMaster will have a
	 * distinct JobID assigned.
	 */
	JobID DEFAULT_JOB_ID = new JobID(0L, 0L);

	// ------------------------------------------------------------------------
	//  Services
	// ------------------------------------------------------------------------

	/**
	 * Gets the leader retriever for the cluster's resource manager.
	 */
	LeaderRetrievalService getResourceManagerLeaderRetriever();

	/**
	 * Gets the leader retriever for the dispatcher. This leader retrieval service
	 * is not always accessible.
	 */
	LeaderRetrievalService getDispatcherLeaderRetriever();

	/**
	 * Gets the leader retriever for the job JobMaster which is responsible for the given job
	 *
	 * @param jobID The identifier of the job.
	 * @return Leader retrieval service to retrieve the job manager for the given job
	 * @deprecated This method should only be used by the legacy code where the JobManager acts as the master.
	 */
	@Deprecated
	LeaderRetrievalService getJobManagerLeaderRetriever(JobID jobID);

	/**
	 * Gets the leader retriever for the job JobMaster which is responsible for the given job
	 *
	 * @param jobID The identifier of the job.
	 * @param defaultJobManagerAddress JobManager address which will be returned by
	 *                              a static leader retrieval service.
	 * @return Leader retrieval service to retrieve the job manager for the given job
	 */
	LeaderRetrievalService getJobManagerLeaderRetriever(JobID jobID, String defaultJobManagerAddress);

	LeaderRetrievalService getWebMonitorLeaderRetriever();

	/**
	 * Gets the leader election service for the cluster's resource manager.
	 *
	 * @return Leader election service for the resource manager leader election
	 */
	LeaderElectionService getResourceManagerLeaderElectionService();

	/**
	 * Gets the leader election service for the cluster's dispatcher.
	 *
	 * @return Leader election service for the dispatcher leader election
	 */
	LeaderElectionService getDispatcherLeaderElectionService();

	/**
	 * Gets the leader election service for the given job.
	 *
	 * @param jobID The identifier of the job running the election.
	 * @return Leader election service for the job manager leader election
	 */
	LeaderElectionService getJobManagerLeaderElectionService(JobID jobID);

	LeaderElectionService getWebMonitorLeaderElectionService();

	/**
	 * Gets the checkpoint recovery factory for the job manager
	 *
	 * @return Checkpoint recovery factory
	 */
	CheckpointRecoveryFactory getCheckpointRecoveryFactory();

	/**
	 * Gets the submitted job graph store for the job manager
	 *
	 * @return Submitted job graph store
	 * @throws Exception if the submitted job graph store could not be created
	 */
	SubmittedJobGraphStore getSubmittedJobGraphStore() throws Exception;

	/**
	 * Gets the registry that holds information about whether jobs are currently running.
	 *
	 * @return Running job registry to retrieve running jobs
	 */
	RunningJobsRegistry getRunningJobsRegistry() throws Exception;

	/**
	 * Creates the BLOB store in which BLOBs are stored in a highly-available fashion.
	 *
	 * @return Blob store
	 * @throws IOException if the blob store could not be created
	 */
	BlobStore createBlobStore() throws IOException;

	// ------------------------------------------------------------------------
	//  Shutdown and Cleanup
	// ------------------------------------------------------------------------

	/**
	 * Closes the high availability services, releasing all resources.
	 * 
 * <p>This method <b>does not delete or clean up</b> any data stored in external stores
	 * (file systems, ZooKeeper, etc). Another instance of the high availability
	 * services will be able to recover the job.
	 * 
 * <p>If an exception occurs during closing services, this method will attempt to
	 * continue closing other services and report exceptions only after all services
	 * have been attempted to be closed.
	 *
	 * @throws Exception Thrown, if an exception occurred while closing these services.
	 */
	@Override
	void close() throws Exception;

	/**
	 * Closes the high availability services (releasing all resources) and deletes
	 * all data stored by these services in external stores.
	 * 
 * <p>After this method was called, any job or session that was managed by
	 * these high availability services will be unrecoverable.
	 * 
 * <p>If an exception occurs during cleanup, this method will attempt to
	 * continue the cleanup and report exceptions only after all cleanup steps have
	 * been attempted.
	 * 
	 * @throws Exception Thrown, if an exception occurred while closing these services
	 *                   or cleaning up data stored by them.
	 */
	void closeAndCleanupAllData() throws Exception;
}

HighAvailabilityServices defines getters for the various services required in a highly available setup.

ZooKeeperHaServices

flink-runtime_2.11-1.7.1-sources.jar!/org/apache/flink/runtime/highavailability/zookeeper/ZooKeeperHaServices.java

/**
 * An implementation of the {@link HighAvailabilityServices} using Apache ZooKeeper.
 * The services store data in ZooKeeper's nodes as illustrated by the following tree structure:
 * 
 * <pre>
 * /flink
 *      +/cluster_id_1/resource_manager_lock
 *      |            |
 *      |            +/job-id-1/job_manager_lock
 *      |            |         /checkpoints/latest
 *      |            |                     /latest-1
 *      |            |                     /latest-2
 *      |            |
 *      |            +/job-id-2/job_manager_lock
 *      |      
 *      +/cluster_id_2/resource_manager_lock
 *                   |
 *                   +/job-id-1/job_manager_lock
 *                            |/checkpoints/latest
 *                            |            /latest-1
 *                            |/persisted_job_graph
 * </pre>
 * 
 * <p>The root path "/flink" is configurable via the option {@link HighAvailabilityOptions#HA_ZOOKEEPER_ROOT}.
 * This makes sure Flink stores its data under specific subtrees in ZooKeeper, for example to
 * accommodate specific permission.
 * 
 * <p>The "cluster_id" part identifies the data stored for a specific Flink "cluster".
 * This "cluster" can be either a standalone or containerized Flink cluster, or it can be job
 * on a framework like YARN or Mesos (in a "per-job-cluster" mode).
 * 
 * <p>In case of a "per-job-cluster" on YARN or Mesos, the cluster-id is generated and configured
 * automatically by the client or dispatcher that submits the Job to YARN or Mesos.
 * 
 * <p>In the case of a standalone cluster, that cluster-id needs to be configured via
 * {@link HighAvailabilityOptions#HA_CLUSTER_ID}. All nodes with the same cluster id will join the same
 * cluster and participate in the execution of the same set of jobs.
 */
public class ZooKeeperHaServices implements HighAvailabilityServices {

	private static final Logger LOG = LoggerFactory.getLogger(ZooKeeperHaServices.class);

	private static final String RESOURCE_MANAGER_LEADER_PATH = "/resource_manager_lock";

	private static final String DISPATCHER_LEADER_PATH = "/dispatcher_lock";

	private static final String JOB_MANAGER_LEADER_PATH = "/job_manager_lock";

	private static final String REST_SERVER_LEADER_PATH = "/rest_server_lock";

	// ------------------------------------------------------------------------
	
	
	/** The ZooKeeper client to use */
	private final CuratorFramework client;

	/** The executor to run ZooKeeper callbacks on */
	private final Executor executor;

	/** The runtime configuration */
	private final Configuration configuration;

	/** The zookeeper based running jobs registry */
	private final RunningJobsRegistry runningJobsRegistry;

	/** Store for arbitrary blobs */
	private final BlobStoreService blobStoreService;

	public ZooKeeperHaServices(
			CuratorFramework client,
			Executor executor,
			Configuration configuration,
			BlobStoreService blobStoreService) {
		this.client = checkNotNull(client);
		this.executor = checkNotNull(executor);
		this.configuration = checkNotNull(configuration);
		this.runningJobsRegistry = new ZooKeeperRunningJobsRegistry(client, configuration);

		this.blobStoreService = checkNotNull(blobStoreService);
	}

	// ------------------------------------------------------------------------
	//  Services
	// ------------------------------------------------------------------------

	@Override
	public LeaderRetrievalService getResourceManagerLeaderRetriever() {
		return ZooKeeperUtils.createLeaderRetrievalService(client, configuration, RESOURCE_MANAGER_LEADER_PATH);
	}

	@Override
	public LeaderRetrievalService getDispatcherLeaderRetriever() {
		return ZooKeeperUtils.createLeaderRetrievalService(client, configuration, DISPATCHER_LEADER_PATH);
	}

	@Override
	public LeaderRetrievalService getJobManagerLeaderRetriever(JobID jobID) {
		return ZooKeeperUtils.createLeaderRetrievalService(client, configuration, getPathForJobManager(jobID));
	}

	@Override
	public LeaderRetrievalService getJobManagerLeaderRetriever(JobID jobID, String defaultJobManagerAddress) {
		return getJobManagerLeaderRetriever(jobID);
	}

	@Override
	public LeaderRetrievalService getWebMonitorLeaderRetriever() {
		return ZooKeeperUtils.createLeaderRetrievalService(client, configuration, REST_SERVER_LEADER_PATH);
	}

	@Override
	public LeaderElectionService getResourceManagerLeaderElectionService() {
		return ZooKeeperUtils.createLeaderElectionService(client, configuration, RESOURCE_MANAGER_LEADER_PATH);
	}

	@Override
	public LeaderElectionService getDispatcherLeaderElectionService() {
		return ZooKeeperUtils.createLeaderElectionService(client, configuration, DISPATCHER_LEADER_PATH);
	}

	@Override
	public LeaderElectionService getJobManagerLeaderElectionService(JobID jobID) {
		return ZooKeeperUtils.createLeaderElectionService(client, configuration, getPathForJobManager(jobID));
	}

	@Override
	public LeaderElectionService getWebMonitorLeaderElectionService() {
		return ZooKeeperUtils.createLeaderElectionService(client, configuration, REST_SERVER_LEADER_PATH);
	}

	@Override
	public CheckpointRecoveryFactory getCheckpointRecoveryFactory() {
		return new ZooKeeperCheckpointRecoveryFactory(client, configuration, executor);
	}

	@Override
	public SubmittedJobGraphStore getSubmittedJobGraphStore() throws Exception {
		return ZooKeeperUtils.createSubmittedJobGraphs(client, configuration);
	}

	@Override
	public RunningJobsRegistry getRunningJobsRegistry() {
		return runningJobsRegistry;
	}

	@Override
	public BlobStore createBlobStore() throws IOException {
		return blobStoreService;
	}

	// ------------------------------------------------------------------------
	//  Shutdown
	// ------------------------------------------------------------------------

	@Override
	public void close() throws Exception {
		Throwable exception = null;

		try {
			blobStoreService.close();
		} catch (Throwable t) {
			exception = t;
		}

		internalClose();

		if (exception != null) {
			ExceptionUtils.rethrowException(exception, "Could not properly close the ZooKeeperHaServices.");
		}
	}

	@Override
	public void closeAndCleanupAllData() throws Exception {
		LOG.info("Close and clean up all data for ZooKeeperHaServices.");

		Throwable exception = null;

		try {
			blobStoreService.closeAndCleanupAllData();
		} catch (Throwable t) {
			exception = t;
		}

		internalClose();

		if (exception != null) {
			ExceptionUtils.rethrowException(exception, "Could not properly close and clean up all data of ZooKeeperHaServices.");
		}
	}

	/**
	 * Closes components which don't distinguish between close and closeAndCleanupAllData
	 */
	private void internalClose() {
		client.close();
	}

	// ------------------------------------------------------------------------
	//  Utilities
	// ------------------------------------------------------------------------

	private static String getPathForJobManager(final JobID jobID) {
		return "/" + jobID + JOB_MANAGER_LEADER_PATH;
	}
}

ZooKeeperHaServices implements the HighAvailabilityServices interface; it creates the required services through the various create methods of ZooKeeperUtils, such as ZooKeeperUtils.createLeaderRetrievalService, ZooKeeperUtils.createLeaderElectionService and ZooKeeperUtils.createSubmittedJobGraphs.
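The znode tree in the class javadoc above is plain string concatenation under the configured root. Here is a minimal sketch of the path construction: pathForJobManager mirrors the getPathForJobManager method above, while buildFullPath is a hypothetical helper for illustration, not Flink API:

```java
// Sketch of how leader paths are derived: a cluster-wide root ("/flink" by
// default), a cluster id, and a per-service suffix. buildFullPath is a
// hypothetical helper; only pathForJobManager mirrors the real method.
public class HaPathSketch {

    // Per-service suffix, mirroring the constant in ZooKeeperHaServices.
    static final String JOB_MANAGER_LEADER_PATH = "/job_manager_lock";

    // Mirrors ZooKeeperHaServices.getPathForJobManager above.
    static String pathForJobManager(String jobId) {
        return "/" + jobId + JOB_MANAGER_LEADER_PATH;
    }

    // Hypothetical helper: the HA root and the cluster id prefix
    // everything a cluster stores in ZooKeeper.
    static String buildFullPath(String root, String clusterId, String servicePath) {
        return root + "/" + clusterId + servicePath;
    }

    public static void main(String[] args) {
        System.out.println(buildFullPath("/flink", "cluster_id_1",
                pathForJobManager("job-id-1")));
        // -> /flink/cluster_id_1/job-id-1/job_manager_lock
    }
}
```

The printed path matches the "/flink/cluster_id_1/job-id-1/job_manager_lock" branch of the tree in the javadoc.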

JobClient.submitJob

flink-runtime_2.11-1.7.1-sources.jar!/org/apache/flink/runtime/client/JobClient.java

public class JobClient {

	private static final Logger LOG = LoggerFactory.getLogger(JobClient.class);

	//......

	/**
	 * Submits a job to a Flink cluster (non-blocking) and returns a JobListeningContext which can be
	 * passed to {@code awaitJobResult} to get the result of the submission.
	 * @return JobListeningContext which may be used to retrieve the JobExecutionResult via
	 * 			{@code awaitJobResult(JobListeningContext context)}.
	 */
	public static JobListeningContext submitJob(
			ActorSystem actorSystem,
			Configuration config,
			HighAvailabilityServices highAvailabilityServices,
			JobGraph jobGraph,
			FiniteDuration timeout,
			boolean sysoutLogUpdates,
			ClassLoader classLoader) {

		checkNotNull(actorSystem, "The actorSystem must not be null.");
		checkNotNull(highAvailabilityServices, "The high availability services must not be null.");
		checkNotNull(jobGraph, "The jobGraph must not be null.");
		checkNotNull(timeout, "The timeout must not be null.");

		// for this job, we create a proxy JobClientActor that deals with all communication with
		// the JobManager. It forwards the job submission, checks the success/failure responses, logs
		// update messages, watches for disconnect between client and JobManager, ...

		Props jobClientActorProps = JobSubmissionClientActor.createActorProps(
			highAvailabilityServices.getJobManagerLeaderRetriever(HighAvailabilityServices.DEFAULT_JOB_ID),
			timeout,
			sysoutLogUpdates,
			config);

		ActorRef jobClientActor = actorSystem.actorOf(jobClientActorProps);

		Future<Object&> submissionFuture = Patterns.ask(
				jobClientActor,
				new JobClientMessages.SubmitJobAndWait(jobGraph),
				new Timeout(AkkaUtils.INF_TIMEOUT()));

		return new JobListeningContext(
			jobGraph.getJobID(),
			submissionFuture,
			jobClientActor,
			timeout,
			classLoader,
			highAvailabilityServices);
	}

	//......
}

A caller such as JobClient.submitJob uses HighAvailabilityServices.getJobManagerLeaderRetriever to obtain the address of the JobManager leader, to which the job is then submitted.

Summary

HighAvailabilityMode has three enum values: NONE, ZOOKEEPER and FACTORY_CLASS; each value has a haActive property indicating whether it supports high availability. HighAvailabilityOptions defines the configuration items with the prefix high-availability.zookeeper.

HighAvailabilityServicesUtils provides static factory methods for creating HighAvailabilityServices, including createAvailableOrEmbeddedServices, createHighAvailabilityServices and createCustomHAServices. createAvailableOrEmbeddedServices is mainly used by FlinkMiniCluster, while createHighAvailabilityServices is mainly used by ClusterEntrypoint: when the high availability mode is NONE it creates StandaloneHaServices, for ZOOKEEPER it creates ZooKeeperHaServices, and for FACTORY_CLASS it delegates to createCustomHAServices.

HighAvailabilityServices defines the getter methods for all the services required for high availability. ZooKeeperHaServices implements this interface and creates the required services through the various create methods of ZooKeeperUtils, such as ZooKeeperUtils.createLeaderRetrievalService, ZooKeeperUtils.createLeaderElectionService and ZooKeeperUtils.createSubmittedJobGraphs. A caller such as JobClient.submitJob uses HighAvailabilityServices.getJobManagerLeaderRetriever to obtain the address of the JobManager leader, to which the job is then submitted.

doc

JobManager High Availability (HA)

Comparison of service registration models: Consul vs ZooKeeper vs etcd vs Eureka

ZooKeeper is based on Zab, a simplified variant of Paxos; etcd is based on the Raft algorithm, and Consul is also based on Raft. As rising stars, etcd and Consul did not settle for ZooKeeper's approach just because it already existed, but adopted the more direct Raft algorithm.

The number one goal of the Raft algorithm is understandability, which is evident from the title of its paper. Raft improves comprehensibility while being no worse than Paxos in performance, reliability and availability.

Raft is more understandable than Paxos and also provides a better foundation for building practical systems

To achieve its goal of being easy to understand, Raft makes a number of design moves, the two most important being:

Problem decomposition

State simplification

Problem decomposition means splitting the complex problem of "consensus among the nodes of a replica set" into several subproblems that can be explained, understood and solved independently; in Raft these are leader election, log replication, safety and membership changes. State simplification is best understood as placing restrictions on the algorithm to reduce the number of states that must be considered, making the algorithm clearer and less uncertain (for example, guaranteeing that a newly elected leader contains all committed log entries).

Raft implements consensus by first electing a distinguished leader, then giving the leader complete responsibility for managing the replicated log. The leader accepts log entries from clients, replicates them on other servers, and tells servers when it is safe to apply log entries to their state machines. A leader can fail or become disconnected from the other servers, in which case a new leader is elected.

The quotation above summarizes how the Raft protocol works: Raft first elects a leader, and the leader is fully responsible for managing the replicated log. The leader accepts all client update requests, replicates them to the follower nodes, and applies them when it is "safe". If the leader fails, the followers elect a new leader.

This involves two of Raft's subproblems: leader election and log replication.

leader election

log replication
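The core rule of Raft leader election is that a candidate becomes leader only with votes from a strict majority of the cluster. The toy tally below illustrates just that rule; it is an illustration, not a Raft implementation:

```java
public class RaftVoteSketch {

    // Raft's election rule: a candidate wins only if it gathers votes from
    // a strict majority of the cluster (its own vote included).
    static boolean wonElection(int clusterSize, int votesReceived) {
        return votesReceived > clusterSize / 2;
    }

    public static void main(String[] args) {
        System.out.println(wonElection(5, 3)); // 3 of 5 is a majority -> true
        System.out.println(wonElection(5, 2)); // 2 of 5 is not        -> false
        System.out.println(wonElection(4, 2)); // a tie never wins     -> false
    }
}
```

The strict-majority requirement is why two candidates can never both win the same term, and why even-sized clusters gain no extra fault tolerance over the next smaller odd size.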

Here is a comparison of the commonly discussed features of frequently used service discovery products. Let's start with the conclusions:

Service health checks

Eureka requires health check support to be explicitly configured. ZooKeeper and etcd consider a service unhealthy when the connection to the service process is lost, while Consul is more detailed, checking for example whether memory usage has reached 90% or whether the file system is running out of space.

Multi data center support

Consul synchronizes across data centers through its WAN gossip protocol; the other products need additional development work for this.

K-V storage service

Except for Eureka, all of these products can expose a key-value storage service; this is an important reason why they pursue strong consistency, which we will come back to later. Providing a storage service also makes it easier to build a dynamic configuration service on top.

The choice made under the CAP theorem in product design

Eureka is a typical AP system, which suits service discovery in distributed scenarios: there, availability has priority and inconsistency is not particularly fatal. Consul, closer to CA, can also provide high availability while ensuring the consistency of its K-V store. ZooKeeper and etcd are CP systems, which sacrifice availability and therefore have little advantage in service discovery scenarios.

Multi-language capability and access protocols for external services

ZooKeeper's cross-language support is weak, while the other products offer HTTP access. Eureka generally supports multi-language clients through a sidecar. etcd additionally provides gRPC support. Besides its standard REST API, Consul also provides DNS support.

Watch support (clients observing changes in service providers)

ZooKeeper supports server-side push of changes, and Eureka 2.0 (under development) also plans to support it. Eureka 1, Consul and etcd all implement change notification through long polling.

Monitoring of the cluster itself

Except for ZooKeeper, all of them expose metrics by default; operators can collect and alert on these metrics for monitoring purposes.

Security

Consul and ZooKeeper support ACLs, and Consul and etcd support HTTPS as a secure channel.

Spring Cloud integration

All of them currently have corresponding Spring Cloud starters that provide integration.

In general, Consul's features and its Spring Cloud integration are relatively complete, and its operational complexity is relatively low. Eureka's design fits the service discovery scenario better, but it still needs continuous improvement.

etcd and ZooKeeper provide very similar capabilities; their positions in the software ecosystem are almost the same, and they can replace each other:

Both are general-purpose, consistent meta-information stores

Both provide a watch mechanism for change notification and distribution

Both are used by distributed systems as shared information storage

Apart from differences in implementation details, language, consistency model and protocol, the biggest difference lies in the surrounding ecosystem.

ZooKeeper is written in Java under Apache and provides an RPC interface. It was first incubated in the Hadoop project and is widely used in distributed systems (Hadoop, Solr, Kafka, Mesos, etc.).

etcd is an open-source product of CoreOS and is comparatively new. With its easy-to-use REST interface and active community, etcd has captured a group of users and is used in some newer clusters (such as Kubernetes).

Although etcd v3 switched to a binary RPC interface for performance, its usability is still better than ZooKeeper's.

Consul's goal is more specific. etcd and ZooKeeper provide distributed consistent storage; concrete business scenarios such as service discovery and configuration change must be implemented by users themselves on top of them.

Consul targets service discovery and configuration change directly, with K-V storage included.

In the software ecosystem, the more abstract a component is, the wider its scope of application, but it will inevitably fall short somewhere in meeting specific business scenarios.

This article is shared by the WeChat official account Soft Zhang Sanfeng (aguzhangsanfeng).

Using composer to install tp5.1, Zsh: no matches found: 5.1*

Blog note

The information in this article comes from Internet sources and my own notes, and represents a personal learning summary. If anything infringes, please contact me to delete it. Thank you!

Problem

zsh: no matches found: 5.1.*

Solution

The command used to work; the error now appears when creating a new project.

The fix is to pin an exact version instead of using the wildcard version constraint:

composer create-project topthink/think=5.1.31 sight
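The underlying cause is that zsh treats an unquoted * as a filename glob and aborts with "no matches found" when nothing matches, before composer ever runs. Quoting the argument is an alternative fix, as this sketch shows:

```shell
# zsh expands an unquoted '*' as a filename glob; with no matching files it
# aborts with "zsh: no matches found" before composer even starts.
# Quoting the argument passes the wildcard through to composer literally:
printf '%s\n' "topthink/think=5.1.*"

# So this form also works in zsh (composer resolves the wildcard itself):
#   composer create-project "topthink/think=5.1.*" sight
```

Either pinning the version or quoting the constraint avoids the zsh glob expansion.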

Thanks

To the almighty Internet, and to my own diligence.

SAP work process memory allocation

Memory allocation sequence for dialog work processes in SAP

What is the memory allocation sequence for dialog work processes in SAP?

When does a work process go into PRIV mode?

How can work processes be prevented from, or minimized in, going into PRIV mode?

What are the SAP parameters used to define the initial roll area, extended memory, heap memory and roll area?

Memory allocation sequence for dialog work processes in SAP:

1. Initially, a defined roll area is used. This roll area is defined by the SAP parameter ztta/roll_first.

Usually ztta/roll_first is set to 1 in SAP so that only the necessary amount is allocated to roll memory.

If the memory from the initial roll area (i.e. ztta/roll_first) is not sufficient for the user context, extended memory is used next.

Extended memory is used until it is full or until the user quota is reached.

Extended memory is defined by the SAP parameter em/initial_size_MB, and the user quota for dialog work processes is defined by the parameter ztta/roll_extension_dia.

If this memory is also not sufficient, then

The rest of the roll area is used. This roll area is defined by the SAP parameter ztta/roll_area.

Once this is also fully occupied,

The system is forced to use local heap memory (private memory), and the work process goes into PRIV mode.

Heap memory is available until one of the following occurs:

either the limit of heap memory for dialog work processes is reached (abap/heap_area_dia), or the entire heap memory of all work processes (abap/heap_area_total) for an application server reaches its limit.

Operating system limits on allocating memory

The swap space in the host system is used up or the upper limit of the operating system address space is reached.

The memory allocation strategy for dialog work processes aims to prevent work processes from allocating R/3 heap memory and thus entering PRIV mode.

When a work process enters PRIV mode, it remains connected to the user until the user ends the transaction. For better performance of the SAP system, we should usually try to avoid work processes going into PRIV mode; this can be done by defining the abap/heap_area_total parameter optimally.
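The dialog allocation sequence above can be sketched as a simple ordered fallback. The stage capacities in the demo are made-up illustration values, not SAP defaults, and the class is a conceptual model only:

```java
public class DialogMemorySketch {

    // Order in which SAP assigns memory to a *dialog* work process, as
    // described above. Names refer to the SAP profile parameters.
    static final String[] STAGES = {
        "initial roll area (ztta/roll_first)",
        "extended memory (em/initial_size_MB, quota ztta/roll_extension_dia)",
        "remaining roll area (ztta/roll_area)",
        "heap memory -> PRIV mode (abap/heap_area_dia, abap/heap_area_total)"
    };

    // Walk the stages in order and report the index of the stage that
    // finally satisfies the request, or -1 if nothing can.
    static int stageThatSatisfies(long requestKb, long[] stageCapacityKb) {
        long remaining = requestKb;
        for (int i = 0; i < stageCapacityKb.length; i++) {
            remaining -= Math.min(remaining, stageCapacityKb[i]);
            if (remaining == 0) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Illustrative capacities in KB, NOT real SAP defaults.
        long[] caps = {1, 4096, 2000, 8192};
        System.out.println(STAGES[stageThatSatisfies(1, caps)]);
        System.out.println(STAGES[stageThatSatisfies(3000, caps)]);
        System.out.println(STAGES[stageThatSatisfies(9000, caps)]);
    }
}
```

A request that only the last stage can satisfy corresponds to a work process that ends up in PRIV mode.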

Memory allocation sequence for non-dialog work processes in SAP

What is the memory allocation sequence for non-dialog work processes (background, update, enqueue and spool work processes) in SAP?

What are the SAP parameters used to define the initial roll area, extended memory, heap memory and roll area?

What is the memory allocation sequence for non-dialog work processes on Windows NT?

The memory allocation sequence for dialog work processes is the same in SAP on all platforms.

However, the memory allocation sequence for non-dialog work processes differs slightly by platform. On Windows NT, the sequence for non-dialog work processes is the same as the dialog work process sequence on other platforms.

The memory allocation sequence for non-dialog work processes in SAP is as follows (except on Windows NT):

Initially, memory is assigned from roll memory. Roll memory is defined by the SAP parameter ztta/roll_area and is used until it is completely exhausted.

If the roll memory is full, then

Heap memory is allocated to the non-dialog work process. Heap memory is available until one of the following occurs:

Either the limit of heap memory for non-dialog work processes is reached (defined by the SAP parameter abap/heap_area_nondia), or the entire heap memory of all work processes of an SAP application server reaches its limit, which is defined by the parameter abap/heap_area_total.

Operating system limits on allocating memory

The swap space in the host system is completely used up. This situation should not occur, as it results in severe performance issues.

Check the swap space requirements for the various platforms and define swap space optimally to avoid this issue.

If all of the above-mentioned heap memory is completely used up, a non-dialog work process can then use SAP extended memory, defined by the SAP parameter em/initial_size_MB.
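For comparison with the dialog case, the non-dialog order described above (except on Windows NT) can be listed explicitly. This is only an illustration of the ordering, not SAP code:

```java
public class NonDialogMemorySketch {

    // Order in which SAP assigns memory to a non-dialog work process
    // (background, update, enqueue, spool), except on Windows NT:
    // roll area first, then heap memory, and extended memory only last.
    static String[] allocationOrder() {
        return new String[] {
            "roll area (ztta/roll_area)",
            "heap memory (abap/heap_area_nondia, abap/heap_area_total)",
            "extended memory (em/initial_size_MB)"
        };
    }

    public static void main(String[] args) {
        for (String stage : allocationOrder()) {
            System.out.println(stage);
        }
    }
}
```

Note how this inverts the dialog sequence: heap memory comes before extended memory, so PRIV mode is not a concern for non-dialog work processes.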