1. What specifically is the proposal that we are reviewing?

This project is a component of Project Clearview, as covered in the PSARC/2005/132 umbrella case (Clearview: Network Interface Coherence). In short: this project rearchitects the existing Solaris IP Multipathing (IPMP) technology so that it can work transparently with all IP-based applications, allowing it to be deployed much more widely by customers. As a side effect, it also enables core technologies such as DHCP to work seamlessly with IPMP. It also provides an improved administrative model and new diagnostic facilities which should greatly ease troubleshooting in IPMP environments -- and reduce our support costs. Finally, it allows several thousand lines of extremely subtle and far-flung code to be removed from the kernel's networking stack. A complete list of the problems the project aims to address can be found in section 1.2 of the design document.

- What is the technical content of the project?

This project rearchitects the existing IP Multipathing technology that was originally presented in 1999/225 and is documented in the Solaris "IP Services" guide. The core functionality and feature set remain, but the internals have changed extensively. Section 3 of the design document provides a high-level view of the architectural changes, and compares and contrasts them with the original. Most of the architectural changes result from the introduction of the "IPMP IP interface", which represents an IPMP group as a single IP interface, rather than as a set of IP interfaces (as before). The basic behavior of the IPMP IP interface is described in sections 3.1-3.10 of the design document, along with several examples. As before, the core IPMP implementation is split between the kernel's IP module and the in.mpathd(1M) daemon.
The daemon remains responsible for monitoring the health of each IP interface in the IPMP group, and the kernel remains responsible for performing the actual multipathing across the available IP interfaces in the group. However, some of the specific responsibilities have changed, as discussed in section 3.12 of the design document. Also as before, through the ip_rcm module, the Solaris RCM framework works with in.mpathd to ensure network connections are unaffected as networking hardware is removed from or configured into the system; see section 3.6 of the design document for more information.

This project also introduces the ipmpstat utility, which allows a wide range of IPMP subsystem issues to be quickly diagnosed. It is described in section 4.2 of the design document.

- Is this a new product, or a change to a pre-existing one? If it is a change, would you consider it a "major", "minor", or "micro" change? See the Release Taxonomy in:

This is a change to a pre-existing product. Despite the amount of proposed change, we have striven to provide backward compatibility at both the administrative and programmatic levels, and thus feel it qualifies for "micro" binding. That said, there are certain changes that will surprise unsuspecting administrators using IPMP, such as the new behavior when data addresses are added to an IPMP group (see section 4.1.1 of the design document). We welcome the ARC's input.

- If your project is an evolution of a previous project, what changed from one version to another?

The design document compares and contrasts the existing and proposed models throughout. The most visible changes are covered in roughly priority order in section 3 of the design document. More detailed changes to the administrative and programmatic models are covered in sections 4 and 5 (respectively) of the design document.

- What is the motivation for it, in general as well as specific terms? (Note that not everyone on the ARC will be an expert in the area.)
See section 1 of the design document for an overview of IPMP and the motivation behind the changes.

- What are the expected benefits for Sun?

See question 1.

- By what criteria will you judge its success?

All IP-based applications are able to work transparently with IPMP.

2. Describe how your project changes the user experience, upon installation and during normal operation.

There are no changes to installation, nor any required configuration changes. On systems using IPMP, administrators will see an additional IP interface for each IPMP group (e.g., through utilities such as ifconfig, netstat, arp, and route). More generally, on systems using IPMP, networking applications will discover (and thus interact with) IPMP IP interfaces rather than the IP interfaces that comprise the group (as they did before), which may be visible to the end user (e.g., the network monitor in the GNOME panel will show the aggregate load for the IPMP IP interface, rather than individual loads for the IP interfaces that comprise the group). Note that applications explicitly configured to operate on a specific IP interface in an IPMP group continue to work as before -- with the same extreme caveat that those applications will not remain available if the IP interface fails. Administrators can optionally change those applications to use the IPMP IP interface and gain high availability.

3. What is its plan? Are there multiple delivery phases?

We plan a single integration, though separable improvements that could in principle be part of this project may be delivered into Nevada early when advantageous (e.g., see PSARC/2006/289).

- Has a design review been done?

A high-level design review was completed via opensolaris.org, using the provided design document.

- What is its current status?

It is currently in active open development, and BFU archives are undergoing alpha testing with customers.

4. Are there related projects in Sun?
As discussed in the Clearview one-pager, this work is part of the broader Clearview objective to rationalize, unify, and enhance the way network interfaces are handled in Solaris. While this is most directly realized by remodeling IPMP groups as IP interfaces, the Clearview tenets are present in other design decisions, such as the ability to give IPMP IP interfaces administratively chosen names. Also, as discussed in section 4.11 of the design document, Clearview's IP-level observability component will enable packet monitoring of an IPMP group as a whole.

Section 1 of the design document briefly covers IPMP in relation to other network availability technologies (802.3ad, Sun Trunking, CGTP, SCTP, and OSPF-MP), and section 3.19 covers their interaction with IPMP.

Sun Cluster uses IPMP to provide high availability between nodes of a cluster, as summarized in section 3.17 of the design document. Sun Cluster also has a number of contracts that are tied to aspects of the existing IPMP architecture. We are working with them to ensure that their needs continue to be met, but have not yet reached closure.

Since the IPMP implementation is part of the kernel's IP module and closely tied to its current architecture, projects which make significant changes to the IP module will need careful coordination. For instance, the IPMP code depends extensively on the existing IPSQ synchronization framework, and makes use of IP's IRE cache mechanism to latch an outbound interface for a given destination.

5. How is the project delivered into the system?

Through existing packages. As discussed in section 4.3.4 of the design document, ipmpstat and in.mpathd are both used in early boot, and will be installed in /sbin through SUNWcsr. Both ipmpstat and in.mpathd require libipmp, which will in turn be installed in /lib and delivered through SUNWcslr (with lint libraries delivered through SUNWarcr).
The new ipmp_admin.h header file (see section 5.21.2) will be installed in /usr/include and delivered through SUNWhea. A number of symlinks are also planned:

* Per section 4.3.4, a /usr/sbin/ipmpstat symlink will be delivered through SUNWcsu.
* For backward compatibility, a /usr/lib/inet/in.mpathd symlink will be delivered through SUNWcsu.
* For build environment consistency, libipmp symlinks will be installed in /lib and delivered through SUNWcsl and SUNWarc.

Finally, as per section 5.20, the existing vni driver will be enhanced, renamed to dlpistub, and still delivered through SUNWckr. As part of those enhancements, dlpistub's attach routine will trigger the creation of an additional /dev/ipmpstub device node.

6. Describe the project's hardware platform dependencies.

None. However, as before, Solaris network drivers that do not support link up/down notification cannot take advantage of IPMP's link-based failure detection feature. As per section 4.2.3 of the design document, these drivers are now easily identifiable.

7. System administration

- How will the project's deliverables be installed and (re)configured?

Using the standard Solaris package utilities.

- How will the project's deliverables be uninstalled?

The project is part of the base system and cannot be uninstalled.

- Does it use inetd to start itself?

No.

- Does it need installation within any global system tables?

No.

- Does it use a naming service such as NIS, NIS+ or LDAP?

No, though ipmpstat maps IP addresses to hostnames when requested; see section 4.2 of the design document.

- What are its on-going maintenance requirements (e.g. keeping global tables up to date, trimming files)?

None.

- How do this project's administrative mechanisms fit into Sun's system administration strategies? E.g., how does it fit under the Solaris Management Console (SMC) and Web-Based Enterprise Management (WBEM), and how does it make use of roles, authorizations and rights profiles?
Additionally, how does it provide for administrative audit in support of the Solaris BSM configuration?

The configuration of an IPMP IP interface is done in the same manner as any other IP interface -- through ifconfig(1M). Any facilities for managing current or persistent IP interface configuration will continue to work as before. However, as before, IPMP-specific configuration (e.g., placing an IP interface into a group) must be done through ifconfig, since the APIs it uses remain private to the IPMP subsystem. If a (long-overdue) future project moves IP interface configuration into a shared library, then other technologies will also be able to manage IPMP-specific configuration (among other things). Since an IPMP IP interface is configured like any other interface (e.g., through ifconfig and route), the existing Network Management RBAC profile enables configuration and provides the auditing support.

- What tunable parameters are exported?

No new parameters. However, existing parameters in /etc/default/mpathd remain (see in.mpathd(1M)). As per section 4.12 of the design document, the ipmp_hook_emulation ndd tunable (see PSARC/2007/198) is eliminated.

8. Reliability, Availability, Serviceability (RAS)

- Does the project make any material improvement to RAS?

Yes -- IPMP is a key networking RAS technology, and this project enables widespread deployment of IPMP. Moreover, the new IPMP model enables RAS tools to transparently operate on IPMP IP interfaces. Also, ipmpstat (section 4.2) and the ability to observe IPMP IP interfaces through snoop (section 4.11) provide notable RAS improvements.

- How can users/administrators diagnose failures or determine operational state? (For example, how could a user tell the difference between a failure and very slow performance?)

The ipmpstat utility reports a wide range of IPMP subsystem issues, as discussed in section 4.2 of the design document.
In addition, as discussed in sections 5.18 and 5.19, the IPMP IP interface provides kstats and MIB II statistics which can be used by bundled tools such as netstat, and any number of third-party utilities.

- What are the project's effects on boot time requirements?

Some changes to boot are required, as discussed in section 4.3 of the design document. These changes only impact systems using IPMP, and we do not expect they will have any measurable impact on boot time. However, this has not yet been measured.

- How does the project handle dynamic reconfiguration (DR) events?

See sections 3.6 and 4.4 of the design document.

- What mechanisms are provided for continuous availability of service?

The core IPMP load-spreading functionality is part of the kernel. The in.mpathd daemon is not yet under control of SMF, and thus is not restarted if it crashes. Bringing the daemon under SMF is feasible, but would increase the scope of the project beyond its original objectives.

- Does the project call panic()? Explain why these panics cannot be avoided.

The new IPMP kernel code calls panic() in a few edge cases where there's evidence of memory corruption or other severe kernel bugs (such as the kernel modhash API not meeting its interface guarantees).

- How are significant administrative or error conditions transmitted?

Issues are communicated via syslog. The IPMP Asynchronous Events interface introduced by PSARC/2002/137 also remains -- along with its classification of Contracted Consolidation Private.

- How does the project deal with failure and recovery?

Ensuring that network connectivity is unaffected across the failure and recovery of networking hardware is the design center of IPMP. The IP interface configuration itself remains in /etc files (as before), but other projects (e.g., NWAM) aim to move this configuration to SMF and thus provide a facility for checkpointing or rolling back persistent IPMP-related IP interface configuration.

- Does it ever require reboot?
If so, explain why this situation cannot be avoided.

No.

- How does your project deal with network failures (including partition and re-integration)? How do you handle the failure of hardware that your project depends on?

See the failure/recovery question above.

- Can it save/restore or checkpoint and recover?

See the failure/recovery question above.

- Can its files be corrupted by failures? Does it clean up any locks/files after crashes?

See the failure/recovery question above. No lock files are used.

9. Observability

- Does the project export status, either via observable output (e.g., netstat) or via internal data structures (kstats)?

Yes.

- How would a user or administrator tell that this subsystem is or is not behaving as anticipated?

Through ipmpstat and the packet monitoring facility; see sections 4.2 and 4.11 of the design document. In addition, traditional network interface tools (netstat, ping, netperf, ...) may be used.

- What statistics does the subsystem export, and by what mechanism?

Both kstats and MIB II statistics are exported; see sections 5.18 and 5.19 of the design document.

- What state information is logged?

No new information is logged, though the wording of some existing log messages has been changed for clarity. The set of log messages is covered in the DIAGNOSTICS section of in.mpathd(1M).

- In principle, would it be possible for a program to tune the activity of your project?

No programmatic facility is provided for tuning.

10. What are the security implications of this project?

- What security issues do you address in your project?

None.

- The Solaris BSM configuration carries a Common Criteria (CC) Controlled Access Protection Profile (CAPP) -- Orange Book C2 -- and a Role Based Access Control Protection Profile (RBAC) rating; does the addition of your project affect this rating?
E.g., does it introduce interfaces that make access or privilege decisions that are not audited, does it introduce removable media support that is not managed by the allocate subsystem, does it provide administration mechanisms that are not audited?

No.

- Is system or subsystem security compromised in any way if your project's configuration files are corrupt or missing?

No.

- Please justify the introduction of any (all) new setuid executables.

None.

- Include a thorough description of the security assumptions, capabilities and any potential risks (possible attack points) being introduced by your project.

This project does not alter any security assumptions, but it addresses several historical security limitations associated with IPMP groups. For instance, as discussed in section 4.12 of the design document, networking firewalls can now be easily and robustly configured. Moreover, networking security technologies that operate on IP interfaces will be able to transparently work in an IPMP environment.

http://sac.sfbay/cgi-bin/bp.cgi?NAME=Security.bp (TBD)

11. What is its UNIX operational environment:

- Which Solaris release(s) does it run on?

Solaris 11, but we may backport to a Solaris 10 Update.

- Environment variables? Exit status? Signals issued? Signals caught?

Aside from the removal of the private SUNW_NO_MPATHD environment variable introduced by PSARC/2002/249 (see section 4.3.4 of the design document), nothing is changed. The ipmpstat utility returns the customary exit status values of 0 on success and 1 on failure; it does not use signals.

- Device drivers directly used (e.g. /dev/audio)? .rc/defaults or other resource/configuration files or databases?

Other than the new project-private /dev/ipmpstub device described in section 5.20 of the design document (which is opened by ifconfig as part of creating an IPMP interface), nothing is changed.

- Does it use any "hidden" (filename begins with ".") or temp files?

No.

- Does it use any locking files?

No.
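The 0-on-success/1-on-failure exit status convention noted above is the contract scripts should key on, rather than parsing diagnostic output. A minimal sketch (in Python for portability; /bin/true and /bin/false stand in for successful and failed invocations, since ipmpstat itself exists only on Solaris):

```python
import subprocess

# ipmpstat follows the customary convention: exit 0 on success, 1 on failure.
# Callers should branch on the exit status rather than scraping stderr.
# "true" and "false" are stand-in commands for this illustration.
def run_status_tool(cmd):
    """Run a status command; return its stdout, or None if it failed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else None

assert run_status_tool(["true"]) == ""     # exit 0: output (here empty) returned
assert run_status_tool(["false"]) is None  # exit 1: treated as failure
```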
- Command line or calling syntax: What options are supported? (Please include man pages if available.) Does it conform to getopt() parsing requirements?

The ipmpstat utility conforms to getopt(); the proposed options are provided in section 4.2 of the design document. The ifconfig utility has never conformed to getopt(), but the proposed "ipmp" subcommand is consistent with the existing ifconfig "grammar". The "ipmp" subcommand is covered in section 4.1.5 of the design document, and the behavior of existing ifconfig subcommands on IPMP IP interfaces is described throughout section 4.1.

- Is there support for standard forms, e.g. "-display" for X programs? Are these propagated to sub-environments?

N/A.

- What shared libraries does it use? (Hint: if you have code, use "ldd" and "dump -Lv".)

The ipmpstat utility uses libipmp, libsocket, libsysevent, and libnvpair. Other shared library dependencies remain unchanged, with the exception of:

* ifconfig and the ip_rcm module, which additionally use libipmp.
* if_mpadm and in.mpathd, which additionally use libinetutil.

- Identify and justify the requirement for any static libraries.

N/A.

- Does it depend on kernel features not provided in your packages and not in the default kernel (e.g. Berkeley compatibility package, /usr/ccs, /usr/ucblib, optional kernel loadable modules)?

No.

- Is your project 64-bit clean/ready? If not, are there any architectural reasons why it would not work in a 64-bit environment? Does it interoperate with 64-bit versions?

In principle, yes, though no 64-bit versions of libipmp, ifconfig, ipmpstat, or other userland utilities presently exist.

- Does the project depend on particular versions of supporting software (especially Java virtual machines)? If so, do you deliver a private copy? What happens if a conflicting or incompatible version is already or subsequently installed on the system?

No.

- Is the project internationalized and localized?

Yes.
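The getopt() conformance claimed for ipmpstat above can be illustrated with a short sketch. This is not the actual implementation (which is written in C, with exact semantics in section 4.2 of the design document); it is a hypothetical, simplified mirror of the documented option set -- the five output modes (-a, -g, -i, -p, -t) and the machine-parseable output formats (-P, -F) -- using Python's getopt module for brevity:

```python
import getopt

# Illustrative sketch only: parse ipmpstat-style options with getopt.
# Exactly one of the five output modes must be chosen; -P and -F select
# the machine-parseable output formats (treated as simple flags here).
def parse_args(argv):
    opts, operands = getopt.getopt(argv, "agiptPF")
    modes = [flag for flag, _ in opts if flag in ("-a", "-g", "-i", "-p", "-t")]
    if len(modes) != 1:
        raise getopt.GetoptError("exactly one output mode is required")
    machine_parseable = any(flag in ("-P", "-F") for flag, _ in opts)
    return modes[0], machine_parseable, operands
```

For example, `parse_args(["-i", "-P"])` returns `("-i", True, [])`, while combining two modes (e.g. `-a -g`) raises a usage error, matching standard getopt-style utilities.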
- Is the project compatible with IPV6 interfaces and addresses?

Yes.

12. What is its window/desktop operational environment?

N/A - no graphical components are provided by this project.

13. What interfaces does your project import and export?

- Please provide a table of imported and exported interfaces, including stability levels. Pay close attention to the classification of these interfaces in the Interface Taxonomy -- e.g., "Committed," "Uncommitted," and "*Private;" see: http://sac.sfbay/cgi-bin/bp.cgi?NAME=interface_taxonomy.bp

Interfaces Imported

Interface                           Classification   Comments
---------------------------------   --------------   ------------------
vni device driver                   Cons. Private    See section 5.20
NIC kstats                          Committed        See section 4.8
sysevent API                        Committed        For PSARC/2002/137
IP module IPSQ framework            Project Private  See question 14
ARP/IP message-passing API          Cons. Private    See question 14
modhash kernel API                  Cons. Private    Impl. artifact
/etc/hostname[6].*                  Committed        See section 4.3

Interfaces Exported

Interface                           Classification   Comments
---------------------------------   --------------   ------------------
IPMP IP Interface                   Committed        See section 3.1[1]
outbound load spreading behavior    Volatile         See section 3.2
source address selection behavior   Volatile         See section 3.3
ifconfig "ipmp" subcommand          Committed        See section 4.1.5
"ipmp" in /etc/hostname[6].*        Committed[2]     See section 4.3
networking commands on IPMP and     Committed[3]     See sections 4.1,
  underlying IP interfaces                             4.4, 4.5, 4.8-4.10
DHCP for IPMP data/test addresses   Committed        See section 4.13
kstats for IPMP IP interfaces       Committed[4]     See section 4.8
MIB II stats for IPMP IP interfaces Uncommitted      See section 5.18
IPv6 link-local IPMP interaction    Committed        See section 4.6
IPMP bring-up at boot               Project Private  See section 4.3
/usr/sbin/ipmpstat                  Committed        See section 4.2
/sbin/ipmpstat alternate location   Volatile         See section 4.3.4
ipmpstat output modes               Committed        See section 4.2
  (-a, -g, -i, -p, -t)
ipmpstat normal output format       Not-an-Interface See section 4.2
ipmpstat -P and -F output formats   Committed        See section 4.2
SIOCG[L]IFCONF and SIOCG[L]IFNUM    Committed        See section 5.1
  IPMP interaction
LIFC_UNDER_IPMP                     Committed        See section 5.1
SIOC[GS]LIFFLAGS IPMP interaction   Committed[5]     See section 5.2
IFF_IPMP                            Committed        See section 5.3
IFF_RUNNING on IPMP IP interfaces   Committed        See section 5.3
Visibility of IPMP and underlying   Committed        See section 5.4
  IP interfaces via PF_ROUTE
SO_RTSIPMP socket option            Committed        See section 5.4
Assorted "set" SIOC* and PF_ROUTE   Volatile         See section 5.*
  ops on underlying IP interfaces
SIOC*LIFUSESRC* IPMP interaction    Volatile         See section 5.14
SIOCGLIFBINDING                     Project Private  See section 5.17
SIOCGLIFGROUPINFO                   Project Private  See section 5.17
lifr_binding lifreq member          Project Private  See section 5.17
struct lifgroupinfo                 Project Private  See section 5.17
dlpistub kernel module              Cons. Private    See section 5.20
/dev/ipmpstub                       Project Private  See section 5.20
libipmp APIs                        Contracted Cons. See section 5.21
/usr/include/ipmp_admin.h           Contracted Cons. See section 5.21
if_indextoname() IPMP behavior      Committed        See section 5.22.1
if_nametoindex() IPMP behavior      Committed        See section 5.22.1
if_nameindex() IPMP behavior        Committed        See section 5.22.2
ifaddrlist() enhancements           Cons. Private    See section 5.22.3
ifaddrlistx(), ifaddrlistx_free()   Cons. Private    See section 5.22.4

[1] Note that many other sections of the design document provide specific discussion of how the IPMP IP Interface will work with key technologies such as packet monitoring and packet filtering. For all cases where that behavior matches expected IP interface behavior, we do not specifically call it out here. Additional minor differences from traditional IP interface behavior are called out in section 4.1.

[2] "Committed" as much as any keyword in the /etc/hostname[6].* files. That is, if a future project were to move IP configuration out of /etc/hostname[6].*, this configuration information would move too.

[3] "Committed" so long as the equivalent behavior on a "normal" IP interface is committed. For instance, adding a route through an IPMP IP interface is committed.

[4] "Committed" only for documented kstats.

[5] "Committed" only for documented flag combinations; see section 5.2 for a full list of combinations.

Interfaces Removed

Previous Interface                  Classification   Comments
---------------------------------   --------------   ------------------
ipmp_hook_emulation ndd tunable     Uncommitted      See section 4.8
SIOCLIFFAILBACK                     Project Private  See section 5.17
SIOCLIFFAILOVER                     Project Private  See section 5.17
SIOC[GS]LIFOINDEX                   Project Private  See section 5.17
SIOCSIPMPFAILBACK                   Project Private  See section 5.17
lifr_movetoindex lifreq member      Project Private  See section 5.17
SUNW_NO_MPATHD env. variable        Project Private  See section 4.3.4
IP{,V6}_DONTFAILOVER_IF             Project Private  See section 5.6
IPV6_BOUND_PIF                      Project Private  See section 5.6

Other IPMP-specific project-private APIs that have not been explicitly called out (such as SIOCSLIFGROUPNAME) are unchanged from their earlier ARC'd classifications, but may have minor behavioral changes, as discussed in the design document.

- Exported public library APIs and ABIs

See sections 5.21 and 5.22 of the design document.

- Other interfaces

Sysevents are used as per PSARC/2002/137.

- What other applications should it interoperate with? How will it do so?

- Is it "pipeable"? How does it use stdin, stdout, stderr?

The ipmpstat utility is pipeable; stdout and stderr have normal semantics.

- Explain the significant file formats, names, syntax, and semantics.

None.

- Is there a public namespace? (Can third parties create names in your namespace?) How is this administered?

There are two notable namespaces (both of which already exist) -- the IP interface namespace and the IPMP group namespace. Both are administered through ifconfig. Section 3.16 of the design document discusses the need for and interaction between the two namespaces.
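The committed if_indextoname()/if_nametoindex()/if_nameindex() behavior in the table above (sections 5.22.1-5.22.2) amounts to a self-consistency contract on the name/index mappings; on a system using IPMP, the enumeration would also include the IPMP IP interfaces. A sketch of the round-trip contract, using Python's wrappers over the same libc functions (illustrative only; not Solaris-specific code):

```python
import socket

# Enumerate every interface and verify that the name<->index mappings
# round-trip, which is what IPMP-aware and -unaware applications alike
# rely on.  On Solaris with this project, the list would include IPMP
# IP interfaces such as ipmp0 alongside the underlying interfaces.
def check_name_index_roundtrip():
    interfaces = socket.if_nameindex()
    for index, name in interfaces:
        assert socket.if_nametoindex(name) == index
        assert socket.if_indextoname(index) == name
    return [name for _, name in interfaces]
```

Any system with a networking stack has at least the loopback interface, so `check_name_index_roundtrip()` always returns a non-empty list.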
- Are the externally visible interfaces documented clearly enough for a non-Sun client to use them successfully?

TBD.

14. What are its other significant internal interfaces (inter-subsystem and inter-invocation)?

- Protocols (public or private)

As before, internally, the IPMP administrative utilities (ifconfig, ipmpstat, if_mpadm, and ip_rcm) communicate with in.mpathd through a TCP loopback messaging protocol over port 5999. The details of that protocol are handled by libipmp, which provides a functional interface above it.

The IP and ARP modules communicate extensively using a longstanding message-passing interface. Several new message types have been added to allow ARP to track the current set of IPMP groups and the active interfaces in each group.

- Private ToolTalk usage

N/A

- Files

No additional files.

- Other

The IPMP subsystem in the IP module enhances and relies extensively on the "IPSQ" synchronization framework to ensure that only a single modification can be made to a given IPMP group at a time. For instance, SIOCSLIFFLAGS operations on two different IP interfaces in the same IPMP group will be serialized with respect to each other. (This has always been the case, but the introduction of the IPMP IP interface has enabled the IPSQ framework code to be simplified considerably.)

- Are the interfaces re-entrant?

Yes.

15. Is the interface extensible? How will the interface evolve?

- How is versioning handled?

As can be seen from the interface table, almost all committed interfaces provide IPMP IP interfaces with traditional Solaris IP interface behavior and compatibility with the committed BSD sockets API. The exceptions are:

* The "ipmp" ifconfig keyword and associated IFF_IPMP flag, which have negligible impact on future extensibility.
* Visibility of underlying IP interfaces via SIOCG[L]IFCONF and PF_ROUTE sockets (along with LIFC_UNDER_IPMP and SO_RTSIPMP). We have ample operational experience showing that exposing underlying IP interfaces to IPMP-unaware applications is problematic, so we are convinced we will not need to revisit this issue.
* The ipmpstat -P and -F output formats, which have been carefully specified to allow ipmpstat to evolve without needing change.
* The ipmpstat output modes, which reflect core, immutable IPMP concepts (data addresses, IPMP groups, underlying IP interfaces, probe targets, and probes). Of course, the information displayed for each output mode can and will likely be extended or shrunk as necessary in the future.

All other interfaces are either project-private, consolidation-private, or contracted consolidation-private, and will be revised in lockstep.

- What was the commitment level of the previous version?

The commitment levels are specified in numerous IPMP-related PSARC cases (1999/225, 1999/637, 2000/503, 2001/579, 2002/137, 2002/249, 2002/263, 2002/278, 2002/615, 2002/713, 2002/742, 2002/755, and 2005/341). While the IPMP architecture has changed extensively, it remains compatible with the previous "version", and thus the expectations established by the commitment levels in those cases are satisfied.

- Can this version co-exist with existing standards and with earlier and later versions or with alternative implementations (perhaps by other vendors)?

N/A

- What are the clients over which a change should be managed?

N/A

- How is transition to a new version to be accomplished? What are the consequences to ISV's and their customers?

No explicit transition is needed, nor is there any ISV impact (other than ISV applications now being able to work in an IPMP environment).

16. How do the interfaces adapt to a changing world?

A core goal of this project is to allow all IP-based applications to transparently work with IPMP.
In particular, since the IP interface abstraction is well-established, modeling an IPMP group as an IP interface (with the expected semantics) ensures that key future technologies will automatically work with IPMP.

17. Interoperability

- If applicable, explain your project's interoperability with the other major implementations in the industry. In particular, does it interoperate with Microsoft's implementation, if one exists?

IPMP is unique to Solaris. However, as before, IPMP can be readily deployed on heterogeneous networks, and will work with any broadcast-capable link-layer medium.

- What would be different about installing your project in a heterogeneous site instead of a homogeneous one (such as Sun)?

Nothing.

- Does your project assume that a Solaris-based system must be in control of the primary administrative node?

N/A

18. Performance

- How will the project contribute (positively or negatively) to "system load" and "perceived performance"?

No impact.

- What are the performance goals of the project? How were they evaluated? What is the test or reference platform?

Our goals are to not impact boot time, network throughput, or network latency (both as a host and as a router).

- Does the application pause for significant amounts of time? Can the user interact with the application while it is performing long-duration tasks?

N/A

- What is your project's MT model? How does it use threads internally? How does it expect its client to use threads? If it uses callbacks, can the called entity create a thread and recursively call back?

No threads are used.

- What is the impact on overall system performance? What is the average working set of this component? How much of this is shared/sharable by other apps?

No impact.

- Does this application "wake up" periodically? How often and under what conditions? What is the working set associated with this behavior?

As before, if probe-based failure detection is enabled, in.mpathd wakes up periodically to send and receive probes.
It also responds to routing socket messages from the kernel, and to requests from other IPMP components via its internal protocol (see question 14). Also as before, every 20 seconds it ensures its view of the IPMP subsystem is synchronized with the kernel through a series of socket ioctl operations. The cost and working set of these operations scale with the complexity of the IPMP configuration, but are typically quite small.

- Will it require large files/databases (for example, new fonts)?

No.

- Do files, databases or heap space tend to grow with time/load?

No.

19. Please identify any issues that you would like the ARC to address.

Does the ARC feel this case is suitable for a micro binding? Why or why not?

20. Appendices to include

- Clearview One-Pager: http://sac.sfbay/PSARC/2005/132/20050225_sebastien.roy (see http://opensolaris.org/os/project/clearview/ for more info)
- References to other documents (place copies in case directory):
  * Design document: ipmp-highlevel-design.pdf
  * ifaddrlistx() and ifaddrlistx_free() description: ifaddrlistx.txt
  * Draft manpages are TBD.