Memory Tips: Physical and Virtual Memory Management, Global Sections, System Parameters

http://64.223.189.234/node/461
Tagged: OpenVMS  •  OpenVMS Alpha  •  OpenVMS I64  •  OpenVMS VAX  •  Tips

The following discusses OpenVMS memory management, including virtual and physical memory, the pagefile, and global sections.

On OpenVMS VAX, on all versions, the VIRTUALPAGECNT system parameter provides the upper limit on the quantity of virtual address space available to a single process. Per the VAX architecture, the full address space is four gigabytes, of which two gigabytes are reserved for system space and two gigabytes for process space: the application code and data, and the process control region.
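The current setting can be read directly from DCL; a minimal check, relying on f$getsyi accepting system parameter names as item codes:

$ WRITE SYS$OUTPUT F$GETSYI("VIRTUALPAGECNT")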

The address space region is determined by the upper two bits of the 32-bit address value. Addresses with 00 in the highest two bits form the process P0 (program) region; 01 is the P1 process control region — home of various RMS and I/O caches, of DCL and its symbol table, and of other process-level structures; 10 is S0 space, the classic system address space; and 11 is S1 space, which functions akin to an attic for all that doesn't fit into S0 space on those VAX boxes with Extended Virtual Addressing (XVA) capabilities.
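As an illustration of that layout, here is a hypothetical DCL fragment that classifies a made-up 32-bit address by its top two bits; the address and symbol names are arbitrary:

$ addr = %X7FFEDF68
$ top2 = ((addr .LT. 0) * 2) + ((addr .AND. %X40000000) .NE. 0)
$ SHOW SYMBOL top2   ! 0 = P0, 1 = P1, 2 = S0, 3 = S1; here, 1 (P1)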

VAX systems with the XVA architectural extension also have the Extended Physical Addressing (XPA) extension; all known VAX implementations that provide one of these extensions provide both.

XVA adds S1 space by extending the size of the virtual page number field in the page tables from 21 to 22 bits, while XPA extends the physical addressing from one gigabyte to four; from 30 bits of physical addressing to 32 bits of physical addressing. XPA and XVA capabilities are available on the VAX 6000 Model 600 series, the VAX 7000 series, and the VAX 10000 series. (Certain of the OpenVMS manuals do not reflect that all members of the VAX 7000 and VAX 10000 offered XPA and XVA. These sections of the documentation were likely simply not updated as members of those series were added.)

Put another way, the largest address space available on OpenVMS VAX is typically one gigabyte, with another gigabyte if you're sneaky, and access to some of the remaining quantity if you can locate code or data in system space.

One of the biggest limits with OpenVMS VAX: all of OpenVMS VAX itself, its data structures, all of the page tables, and paged and non-paged pool must together fit into one or (on those few VAX processors with XVA support) two gigabytes of system space. This means that configurations requiring large virtual address space, large numbers of processes, large non-paged pool and/or large working sets can be limited by the size of system space. This is a limit inherent in the VAX architecture.

All OpenVMS Alpha releases prior to V7.0 function similarly to OpenVMS VAX.

With OpenVMS Alpha V7.0 and later and with OpenVMS I64 on Integrity servers and Intel Itanium processors, 64-bit address space is in use. This means that existing applications can operate as they have within 32-bit address space, and new and modified applications can choose to use and extend into 64-bit address space.

What was once a 32-bit virtual address value is now 64-bit, allowing for a rather larger address space. The same conceptual subsetting of VAX virtual address space exists: the uppermost range (negative, when addresses are viewed as signed values) is system space, and the lowest (positive) range is process address space. The upper two gigabytes are S0 and S1 space, and the lowest two gigabytes are P0 and P1 space. In the gigantic gap between those two two-gigabyte ranges live S2 space in the upper half of the range and P2 space in the lower half. S2 is also known as 64-bit system space, and P2 is referred to as 64-bit process space.

Applications must be modified to use 64-bit addressing, and there are system services and RTL calls to support this; existing pointers must be increased in size. And as has been the case for eons, you still need enough PGFLQUOTA, or enough backing storage file space, to back that portion of process address space that needs it; there must be enough storage to page out the address space. Memory that is marked as memory-resident needs no backing storage.
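As a concrete check of that backing storage, a process's pagefile quota and its remaining amount can be read from DCL; PAGFILCNT (a standard f$getjpi item code, not mentioned above) reports the remaining quota:

$ WRITE SYS$OUTPUT "PGFLQUOTA: ", F$GETJPI("", "PGFLQUOTA")
$ WRITE SYS$OUTPUT "Remaining: ", F$GETJPI("", "PAGFILCNT")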

There are other system parameters involved in memory management, including GBLSECTIONS, GBLPAGES and GBLPAGFIL.

GBLSECTIONS controls the total number of global sections that can exist on an OpenVMS system; installed images and global sections both consume these. HoffmanLabs prefers to increase the value so that between 25% and 50% of the total number available remains free. The cost of having this value set too high is negligible.
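A minimal sketch of that headroom check from DCL, relying on f$getsyi accepting both the FREE_GBLSECTS item code and the GBLSECTIONS parameter name:

$ free = F$GETSYI("FREE_GBLSECTS")
$ total = F$GETSYI("GBLSECTIONS")
$ WRITE SYS$OUTPUT "Free global sections: ''free' of ''total'"

A result much below a quarter of the total free suggests raising GBLSECTIONS.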

GBLPAGES controls the total number of pages that can be configured across all global sections, whether the sections and the pages within them are active or have a deletion pending. Because multiple sections can be present at once, this parameter is typically set rather higher than might appear necessary, particularly if applications are creating and replacing global sections. Each 128 entries requires 4 bytes of storage. Somewhere between 20% free and as much as 75% free can be required, the higher figure where there are large contiguous sections and/or sections that see frequent deletion and replacement.

GBLPAGES sizes the Global Page Table (GPT) data structure. With OpenVMS V7.3, a decision was implemented within OpenVMS to size the GBLPAGES value to three-quarters of available physical memory; to pre-allocate room for a large GPT. This has the cost of three memory pages for every four million pages of system memory (4,194,304 pages) that can be allocated within the system. With the GPT located in S2 space and created as demand-zero memory, there is no cost to this construction until the memory is referenced. And once the memory is referenced, the cost is still very low. This OpenVMS V7.3 parameter change was a trade-off, and intended to reduce system management effort.

To determine the free and peak values of GBLPAGES on your OpenVMS system, you can use the following sequence:

$ @SYS$UPDATE:AUTOGEN SAVPARAMS SAVPARAMS
$ SEARCH SYS$SYSTEM:AGEN$FEEDBACK.DAT GBLPAGES

The current and peak values will be displayed.

Related values can be acquired from f$getsyi using the item codes FREE_GBLPAGES and CONTIG_GBLPAGES, as well as FREE_GBLSECTS. The status of the pagefile can be determined with the SHOW MEMORY command, or with the f$getsyi item codes PAGEFILE_FREE and PAGEFILE_PAGE.
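For example:

$ WRITE SYS$OUTPUT "Free global pages:       ", F$GETSYI("FREE_GBLPAGES")
$ WRITE SYS$OUTPUT "Contiguous global pages: ", F$GETSYI("CONTIG_GBLPAGES")
$ WRITE SYS$OUTPUT "Free global sections:    ", F$GETSYI("FREE_GBLSECTS")
$ WRITE SYS$OUTPUT "Pagefile free pages:     ", F$GETSYI("PAGEFILE_FREE")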

GBLPAGFIL is akin to the process PGFLQUOTA pagefile quota; the GBLPAGFIL parameter limits the total backing storage from the pagefile(s) available to global sections. GBLPAGFIL is a dynamic parameter on V7.1 and later.
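Because the parameter is dynamic, it can be examined and adjusted on the running system with SYSGEN; a sketch, where 131072 is an arbitrary example value rather than a recommendation, and where any change intended to survive a reboot also belongs in MODPARAMS.DAT:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW GBLPAGFIL
SYSGEN> SET GBLPAGFIL 131072
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT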

With OpenVMS Alpha V7.0, the VIRTUALPAGECNT system parameter was set to its maximum value, %XFFFFFFFF. This system parameter is now obsolete.

With OpenVMS Alpha V7.2, the lock management data structures moved into S2 space, freeing up substantial space in non-paged pool. Non-paged pool is in 32-bit address space.

With OpenVMS Alpha V7.3, the XFC extended file cache arrived, moving what was once the virtual memory required for the VIOC caches up into the XFC caches in S2 space and freeing up space in S0 and S1 space. The GBLPAGES auto-sizing change described above was also implemented in this release, largely rendering manual tuning of that parameter obsolete.

Classically, the application INSVIRMEM insufficient virtual memory error was triggered either by VIRTUALPAGECNT or by the process PGFLQUOTA pagefile quota. With OpenVMS Alpha V7.0 and later and with OpenVMS I64, INSVIRMEM usually points to an insufficient PGFLQUOTA setting.
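When chasing an INSVIRMEM error, a reasonable first step is to display the process quotas and, where warranted, to raise PGFLQUOTA in the UAF; the username and value below are placeholders:

$ SHOW PROCESS/QUOTA
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY username /PGFLQUOTA=500000
UAF> EXIT

UAF changes take effect at the process's next login.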

For additional details on VAX physical and virtual addressing, see VAX Physical and Virtual Addressing; XPA and XVA.

 