This assumes that your CD-ROM device is called /dev/cdrom and that you want to mount it to /mnt/cdrom. Refer to the mount man page for more specific information or type mount -h at the command line for help information.
After mounting, you can use the cd command to navigate the newly available filesystem through the mount point you just created.
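For completeness, the same operation can be performed programmatically. The following is a minimal sketch using the Linux-specific mount(2) system call, assuming an iso9660 CD-ROM and a program run with root privileges:

    #include <stdio.h>
    #include <sys/mount.h>   /* mount(), MS_RDONLY (Linux-specific header) */

    int main(void)
    {
        /* Equivalent to: mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom */
        if (mount("/dev/cdrom", "/mnt/cdrom", "iso9660", MS_RDONLY, NULL) != 0) {
            perror("mount");   /* typically EPERM unless run as root */
            return 1;
        }
        return 0;
    }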
Data collection jobs run automatically according to the parameters you have selected, and will appear in the Job Controller as scheduled jobs. The system can also be configured to generate advisories: brief text messages describing an actual or potential problem and the suggested corrective action.
Reports and SRM Summaries viewed through the console are generated from the data collected and stored on the SRM Server. Schedule your first reports to run after your first data collection jobs have completed. Note that data collection and report generation jobs are scheduled independently, at different times and intervals: you can, for example, collect data nightly, hourly, or on demand, but generate reports only once per week if desired.
Greater disk throughput could be achieved by rewriting the disk drivers to chain together kernel buffers. This would allow contiguous disk blocks to be read in a single disk transaction. Many disks used with UNIX systems contain either 32 or 48 512 byte sectors per track. Each track holds exactly two or three 8192 byte filesystem blocks, or four or six 4096 byte filesystem blocks. The inability to use contigu- ous disk blocks effectively limits the performance on these disks to less than 50% of the available band- width. If the next block for a file cannot be laid out contiguously, then the minimum spacing to the next allocatable block on any platter is between a sixth and a half a revolution. The implication of this is that the best possible layout without contiguous blocks uses only half of the bandwidth of any giv en track. If each track contains an odd number of sectors, then it is possible to resolve the rotational delay to any number of sectors by finding a block that begins at the desired rotational position on another track. The reason that block chaining has not been implemented is because it would require rewriting all the disk drivers in the system, and the current throughput rates are already limited by the speed of the available processors.
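As a quick check of the track arithmetic above: 32 × 512 = 16,384 bytes = 2 × 8,192, and 48 × 512 = 24,576 bytes = 3 × 8,192, so each track holds exactly two or three 8192-byte blocks (equivalently four or six 4096-byte blocks), with no slack sectors left over.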
– Ext2fs does not use fragments; it performs its allocations in smaller units. The default block size on ext2fs is 1Kb, although 2Kb and 4Kb blocks are also supported.
– Ext2fs uses allocation policies designed to place logically adjacent blocks of a file into physically adjacent blocks on disk, so that it can submit an I/O request for several disk blocks in a single operation.
• If you have any user catalogs (usercats) on the volume, a full volume copy isn’t a good choice; in particular, it is not recommended for zFS file systems that use a usercat on the volume.
• Data set copy:
• Logical data set copy: can rename the filesystem during the copy operation, and depending on your catalog structure the copy may be immediately accessible. A popular choice that works well in many situations. (z/OSMF’s Deployment Manager task uses this method.)
8.1 Conversion from UNIX permissions to NT ACLs
For those who decide to move from a single file-level security model to a merged environment, the first question usually asked is ’Will I lose all my UNIX perms on those areas where I now need NT ACLs?’ No one wants to ask their users to go in and re-set file permissions to take advantage of the new functionality. We realized early on in the design process that the need to fake up an ACL from a set of UNIX perms would be an important one, particularly during migration. Since we had many Windows customers using UNIX file security, we toyed with a one-time conversion tool which would read the UNIX perms on every file and set an ACL, using a mapping file which the administrator would set up in advance. This idea was popular during the customer discussions, but no one wanted to set up the mapping file if they could help it. The solution we opted for was to fake up an ACL when a Windows user in a freshly converted NTFS tree viewed the security on a file. The faked-up ACL would then be replaced by a real one when the user clicked OK, even if no changes were made to the displayed ACL information. Thus, over time, an NTFS qtree would migrate to NT ACLs. For phase 1, the UNIX permissions would still be stored for the file (and recomputed if the ACL changed) for NFS access. A phase 2 implementation would not require the UNIX perms, since UID-to-SID mapping would allow NFS access to be evaluated directly against the ACL itself.
8.2 User Authentication - NIS or NT Domain?
Access to migrated and purged data is the same as access to locally stored data, except for the possibility of a slight delay during the retrieval of data that has been purged from the filesystem.
Information that can be provided from the file’s metadata, or from data contained in the locally retained stub file, is available without triggering a retrieval from the back-end system. This enables many queries to be completed at local disk speed, even for large files which have been migrated and purged.
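To illustrate, a metadata-only query such as stat(2) can usually be satisfied from the locally retained information, while reading the data of a purged file is what forces a recall; a minimal sketch (exact recall behavior depends on the HSM product):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char *argv[])
    {
        struct stat sb;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        /* Answered from local metadata or the stub at local disk
         * speed, even if the file's data has been migrated and purged. */
        if (stat(argv[1], &sb) != 0) {
            perror("stat");
            return 1;
        }
        printf("size=%lld bytes, resident 512-byte blocks=%lld\n",
               (long long)sb.st_size, (long long)sb.st_blocks);
        /* A purged file typically reports far fewer resident blocks
         * than its size implies; an open()+read() of the purged data
         * would be what triggers retrieval from the back-end system. */
        return 0;
    }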
Toleration with zFS R10, Step 2
• After APAR OA25026 is active on each z/OS V1R10 system, specify the sysplex_admin_level=2 configuration option in the IOEPRMxx file(s). Make this level active on all z/OS V1R10 systems through another rolling IPL. This is the toleration function for zFS on z/OS V1R11 and R12. (The default for sysplex_admin_level is 1.)
Changing the priority of a process only affects the size of its time-slice within one scheduling epoch [18], so adjusting priorities is not enough.
Fortunately, most operating systems include heuristics to schedule I/O-bound processes whenever they are runnable. Tsafrir et al. exploited this scheduler behavior to monopolize the CPU [24]. Our attack exploits this feature by launching a sub-process to perform all the file-system manipulations needed to prepare for the next race, effectively laundering the main attacker’s scheduling priority by dumping all the dirty work on its child. The main attacker thread sleeps while the sub-process is working. The main attacker process spends almost all of its time sleeping – either waiting on its worker sub-process or the victim – and hence looks like an I/O-bound process and gets preferential scheduling. Although the victim also spends most of its time suspended, it apparently does not get the same priority boost. This is probably because it is suspended as a result of a signal instead of I/O or voluntarily sleeping.
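A minimal sketch of the laundering structure described above; prepare_next_race() is a hypothetical stand-in for the per-round filesystem manipulations, not code from the paper:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical placeholder for the filesystem work (symlink
     * swaps, renames, ...) that sets up the next round of the race. */
    static void prepare_next_race(void)
    {
        /* ... filesystem manipulations elided ... */
    }

    int main(void)
    {
        /* The real attack repeats this round many times. */
        pid_t pid = fork();
        if (pid == 0) {             /* worker child does the dirty work */
            prepare_next_race();
            _exit(0);
        }
        /* The parent blocks here, accumulating sleep time rather than
         * CPU time, so the scheduler's heuristics classify it as
         * I/O-bound and boost it when it next becomes runnable. */
        waitpid(pid, NULL, 0);
        return 0;
    }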
Please note that local scan mode requires more careful configuration of user rights. The Daemon must have read access to each file to be scanned. If you run the Daemon on a mail server with the Cure and Delete options enabled, you must allow write access as well.
Using the Daemon with mail servers requires special attention, because mail filters usually act on behalf of the mail system and use its rights. In local scan mode the mail filter usually creates a file containing the message received from the mail system and passes the Daemon a path to it. You must therefore carefully set the access rights on the directory where the filters create these files. We recommend either including the user whose rights the Daemon uses in the mail subsystem group, or running the Daemon with the rights of the mail system user.
The z/OS UNIX filesystem …
• The z/OS hierarchical filesystem is actually a bit more involved than the previous slide shows
• The sysplex shared filesystem environment needs to support multiple concurrent z/OS releases and even multiple concurrent service levels for different LPARs in a single filesystem hierarchy
– data blocks from a file are all placed in the same cylinder group
– files in the same directory are placed in the same cylinder group
– the i-node for a file is placed in the same cylinder group as the file
zFS will also move ownership when the owning system is shut down, or when an abnormal outage occurs on the zFS owning system. Thus the z/OS UNIX owner and the zFS owner can be two entirely different systems, depending on the sequence of events. This is normal, and for a sysplex that is using the zFS sysplex=filesys support on all sysplex members it should have no negative consequences.
• Supporting large amounts of data.
• Allowing concurrent reads and writes from multiple nodes, which is key in parallel processing.
GPFS uses a sophisticated token-management system to provide data consistency while allowing simultaneous shared read and write access, with multiple independent paths to the same file by the same name from anywhere in the system. Even when nodes are down or hardware resource demands are high, GPFS can find an available path to the filesystem data.
This feature provides for privileged programs which may use files inaccessible to other users. For example, a program may keep an accounting file which should neither be read nor changed except by the program itself. If the set-user-identification bit is on for the program, it may access the file although this access might be forbidden to other programs invoked by the given program’s user. Since the actual user ID of the invoker of any program is always available, set-user-ID programs may take any measures desired to satisfy themselves as to their invoker’s credentials. This mechanism is used to allow users to execute the carefully written commands which call privileged system entries. For example, there is a system entry invocable only by the “super-user” (below) which creates an empty directory. As indicated above, directories are expected to have entries for “.” and “..”. The command which creates a directory is owned by the super-user and has the set-user-ID bit set. After it checks its invoker’s authorization to create the specified directory, it creates it and makes the entries for “.” and “..”.
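Since the invoker’s actual user ID remains available, a set-user-ID program can vet its caller before exercising its elevated rights. A minimal sketch using the standard getuid()/geteuid() calls:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t real = getuid();    /* the invoking user's actual ID */
        uid_t eff  = geteuid();   /* the file owner's ID when set-user-ID is on */

        printf("real uid=%d, effective uid=%d\n", (int)real, (int)eff);

        /* A careful set-user-ID program checks the real ID against its
         * own authorization rules before touching protected files, just
         * as the directory-creating command checks its invoker. */
        return 0;
    }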
• It creates file1.txt and allows us to insert content into the file.
• After inserting the content, press Ctrl+D (end-of-file), or Ctrl+C, to exit and return to the shell.
$ cat file.txt > newfile.txt
• Reads the contents of file.txt and writes them to newfile.txt, overwriting anything newfile.txt previously contained. If newfile.txt does not exist, it will be created.
To create a new file or completely rewrite an old one, there is a create system call that creates the given file if it does not exist, or truncates it to zero length if it does exist; create also opens the new file for writing and, like open, returns a file descriptor.
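On modern systems the call described is spelled creat(2), and is equivalent to open() with O_WRONLY|O_CREAT|O_TRUNC; a minimal sketch (the file name is illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Creates log.txt if absent, truncates it to zero length if
         * present, and returns a descriptor open for writing. */
        int fd = creat("log.txt", 0644);
        if (fd < 0) {
            perror("creat");
            return 1;
        }
        if (write(fd, "hello\n", 6) != 6)
            perror("write");
        close(fd);
        return 0;
    }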
The filesystem maintains no locks visible to the user, nor is there any restriction on the number of users who may have a file open for reading or writing. Although it is possible for the contents of a file to become scrambled when two users write on it simultaneously, in practice difficulties do not arise. We take the view that locks are neither necessary nor sufficient, in our environment, to prevent interference between users of the same file. They are unnecessary because we are not faced with large, single-file data bases maintained by independent processes. They are insufficient because locks in the ordinary sense, whereby one user is prevented from writing on a file that another user is reading, cannot prevent confusion when, for example, both users are editing a file with an editor that makes a copy of the file being edited.
For each mounted (and not filtered) filesystem:
– name of the mounted block device
– name of the filesystem type (e.g. ext3, ffs, zfs)
– fully qualified path name of the mount point
– device type: one of sg_fs_unknown, sg_fs_regular, sg_fs_special, sg_fs_loopback, sg_fs_remote, or any combination
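The device-type constants above appear to be libstatgrab’s sg_fs values; rather than assume that library’s exact API, here is a sketch that walks the mount table with the standard getmntent(3) interface and prints the first three fields from the list:

    #include <mntent.h>
    #include <stdio.h>

    int main(void)
    {
        /* /proc/mounts is Linux-specific; /etc/mtab is the classic path. */
        FILE *mtab = setmntent("/proc/mounts", "r");
        struct mntent *m;

        if (mtab == NULL) {
            perror("setmntent");
            return 1;
        }
        /* Block device, filesystem type, and mount point, matching
         * the first three items listed above. */
        while ((m = getmntent(mtab)) != NULL)
            printf("%-24s %-10s %s\n", m->mnt_fsname, m->mnt_type, m->mnt_dir);
        endmntent(mtab);
        return 0;
    }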
All other product and company names and marks mentioned in this document are the property of their respective owners and are mentioned for identification purposes only.
THIS SOFTWARE MAY BE AVAILABLE ON MULTIPLE OPERATING SYSTEMS. HOWEVER, NOT ALL OPERATING SYSTEM PLATFORMS FOR A SPECIFIC SOFTWARE VERSION ARE RELEASED AT THE SAME TIME. SEE THE README FILE FOR THE AVAILABILITY OF THIS SOFTWARE VERSION ON A SPECIFIC OPERATING SYSTEM PLATFORM.
SecureShare’s lock manager maintains — in a unified set of kernel-space data structures — the lock types needed for multiprotocol file-open and locking support. Since it is integrated into the Data ONTAP kernel, the lock manager is able to validate that reads and writes of files and directories do not violate locks. SecureShare enforces CIFS locks and file-open semantics at the system level. By contrast, in a UNIX-based CIFS implementation such as Samba [9], Windows file-open information is maintained by a module that is disjoint from — and has no interaction with — the module that implements UNIX and NLM locking functionality. Moreover, because data access functions under UNIX do not normally check for lock conflicts, there is nothing to stop local or NFS-based UNIX users and applications from accessing, corrupting, or even removing “locked” Windows files and data. Thus, it is possible in such an environment for UNIX users or NFS clients to write, remove, or move files that CIFS-based Windows applications are holding open and actively accessing.