PSARC Questions Version 1.17

Here is a comprehensive list of questions that ARC members may ask of
project presenters. Providing this information in advance of your review
will greatly simplify the ARC's process of identifying the critical
information relevant to your project. It is expected that many of these
issues may be unresolved at the time of an inception review. However,
they should be answerable at commitment review, and may be addressed at
inception if significant.

Please make your answers concise! Most if not all of these questions
will be addressed in related documents such as 1-pagers, specs, design
documents, etc. There is no reason to duplicate effort, and "see section
3.2 in the design spec" is an excellent answer. Of course, the
referenced material must be provided with your submission.

Entire sets of questions may be N/A for your project. For example,
device drivers rarely have GUIs, and so the entire GUI section can just
be deleted. In such cases, PLEASE NOTE N/A FOR THE MAIN QUESTION, AND
DELETE THE REST OF THAT QUESTION SET.

This questionnaire is meant to provide the ARC with an overview of your
project, and it touches on the main areas of architectural interest.
This template will be revised based on its users' experiences; your
comments and suggestions are welcome. Send them to john.plocher@sun.com.

For advice about architectural choices, pointers to various SAC
guidelines, and other project considerations including Licensing and
Patents, see http://sac.sfbay/arc/ARC-Considerations.html

------------------------------------------------------------------------

1. What specifically is the proposal that we are reviewing?

   Details of the configuration mechanism proposed by the Brussels
   Project, including all of its components, are available in [3].
   PSARC/2007/429 will deliver the Framework delivery component,
   providing the kernel and user-space components of a configuration
   mechanism for administering network drivers through the GLDv3
   framework.
   - What is the technical content of the project?

     A description of the Project's architecture is available in [2].
     The PSARC/2007/429 framework component will deliver the
     functionality described in Section 3 of [2], which can be
     summarized as follows: GLDv3 drivers will be able to register
     callback functions to be invoked for setting/getting property
     values. Dladm/libdladm will be modified to issue system calls to
     set/get property values via the dld layer.

   - Is this a new product, or a change to a pre-existing one? If it is
     a change, would you consider it a "major", "minor", or "micro"
     change?

     This is a change to a pre-existing product. The project will
     attempt to clean up ambiguous or confusing usage in the existing
     driver-configuration syntax (see, for example, CR 6565373 "driver
     ndd parameter behavior does not match the ieee802.3(5) man page"),
     so we would like to target a "minor" binding.

   - If your project is an evolution of a previous project, what
     changed from one version to another?

     The project is an extension of existing GLDv3 interfaces
     introduced by PSARC 2004/571 ("Nemo - a.k.a GLD v3"). Section 3
     of [2] provides details of the proposed extensions.

   - What is the motivation for it, in general as well as specific
     terms?

     Section 1 of [2] describes the motivation for this project.

   - What are the expected benefits for Sun?

     The usage of dladm for GLDv3 driver configuration is consistent
     with trends followed by recent features such as Wifi (PSARC
     2006/406) and Link Aggregation. The consistent call interface it
     provides across network drivers can be leveraged by link
     configuration tools for layered services like NWAM (PSARC
     2007/132). As a side effect, it allows cleaner driver
     implementations by removing the need for drivers to implement
     ndd(1M) support. The proposal will provide a property-manipulation
     method that allows property settings to persist across a driver
     restart, while providing a flexible way of adjusting properties
     dynamically.
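     The callback model summarized above (drivers registering set/get
     property entry points that dladm reaches through the dld layer)
     can be sketched in plain C. This is a conceptual, user-space model
     only: the names echo the case's interface table (mac_callbacks_t,
     mc_setprop, mc_getprop), but the signatures, the toy driver, and
     the ENOTSUP return are illustrative assumptions, not the actual
     GLDv3 kernel API.

     ```c
     /*
      * Conceptual sketch (NOT the real GLDv3/mac API): a driver
      * exports set/get property entry points through a callbacks
      * structure, and the framework invokes them on behalf of dladm.
      */
     #include <errno.h>
     #include <stdio.h>
     #include <stdlib.h>
     #include <string.h>

     typedef struct mac_callbacks_s {
             int (*mc_setprop)(void *, const char *, const char *);
             int (*mc_getprop)(void *, const char *, char *, size_t);
     } mac_callbacks_t;

     /* Toy driver state: a single "default_mtu" property. */
     typedef struct {
             unsigned int td_mtu;
     } toy_driver_t;

     static int
     toy_setprop(void *arg, const char *name, const char *val)
     {
             toy_driver_t *td = arg;

             if (strcmp(name, "default_mtu") != 0)
                     return (ENOTSUP);  /* property not handled */
             td->td_mtu = (unsigned int)strtoul(val, NULL, 10);
             return (0);
     }

     static int
     toy_getprop(void *arg, const char *name, char *buf, size_t len)
     {
             toy_driver_t *td = arg;

             if (strcmp(name, "default_mtu") != 0)
                     return (ENOTSUP);
             (void) snprintf(buf, len, "%u", td->td_mtu);
             return (0);
     }

     /* The callbacks a toy driver would register with the framework. */
     static const mac_callbacks_t toy_callbacks = {
             toy_setprop,
             toy_getprop
     };

     int
     main(void)
     {
             toy_driver_t td = { 1500 };
             char buf[16];

             /* Framework acting for dladm set-linkprop/show-linkprop. */
             (void) toy_callbacks.mc_setprop(&td, "default_mtu", "9000");
             (void) toy_callbacks.mc_getprop(&td, "default_mtu",
                 buf, sizeof (buf));
             (void) printf("default_mtu = %s\n", buf);
             return (0);
     }
     ```

     The point of the pattern is that dladm never needs driver-specific
     system calls; it only needs the registered entry points, which is
     what lets one administrative command cover all converted drivers.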
   - By what criteria will you judge its success?

     o the framework supports the ability to deliver property updates
       to GLDv3 drivers,
     o multiple GLDv3 drivers have been converted to use the new
       framework.

     Also see the response to question 3.

2. Describe how your project changes the user experience, upon
   installation and during normal operation. What does the user
   perceive when the system is upgraded from a previous release?

   Users will have a more consistent and flexible model for
   administering network interfaces. The user will be able to use the
   dladm set-linkprop and show-linkprop subcommands to manage link
   properties of all GLDv3 drivers.

   The ability to use existing support for configuration methods like
   driver.conf or ndd will be unchanged after PSARC/2007/429. However,
   as each network driver is converted to export callbacks for the
   Brussels framework, existing ndd handlers will emit a warning
   message (content TBD) similar to the one used for PSARC/2003/166.
   The warning will advise the user that the dladm(1M) command is the
   recommended method for administering the property. Any ndd tunables
   that are not supported as Public properties (e.g., non-MII
   properties) will be handled as Private properties in the driver, as
   shown in Appendix B of [2], and the emitted warning for these cases
   will redirect the administrator to dladm(1M).

3. What is its plan?

   - What is its current status? Has a design review been done? Are
     there multiple delivery phases?

     Phase I of Project Brussels addresses property administration at
     the network driver layer (Layer 2 of the network stack).
     Deliverables for Phase I are described in [3]. This PSARC case,
     PSARC/2007/429, delivers the framework components of Project
     Brussels, including enhancements to the GLDv3 framework to
     support property management, along with the conversion of GLDv3
     drivers to use the framework.
     Framework delivery for Phase I of Brussels will be done when

     - the framework supports the ability to deliver property updates
       to GLDv3 drivers,
     - at least one GLDv3 driver (bge) has been converted to use the
       new framework.

     Note that Phase I itself will be completed when all the
     components described in Section 3 of the Umbrella Case described
     in [3] are delivered. Subsequent Phases of the Project will
     address deficiencies in the management model for tunables at the
     TCP/IP layer of the network stack and will be covered by other
     appropriate ARC cases.

     A design review for Phase I of Project Brussels was conducted on
     opensolaris.org, and a prototype for the project has been
     developed. The project is poised to transition into the
     development phase.

4. Are there related projects in Sun?

   Yes.

   - If so, what is the proposal's relationship to their work? Which
     not-yet-delivered Sun (or non-Sun) projects (libraries, hardware,
     etc.) does this project depend upon? What other projects, if any,
     depend on this one?

     * Clearview Nemo unification and vanity naming (PSARC/2006/499)

       The Vanity Naming component of the Clearview project, which is
       also an effort to provide an improved administrative model for
       network interfaces, allows network interfaces to be given
       administratively chosen names. The two projects have some
       common requirements, including a daemon that manages link
       configuration information and acts as the interface to the
       repository for managing persistent configuration. For both
       Clearview and Brussels, the daemon provides initial
       configuration information needed when the driver starts, and,
       in both cases, the daemon must be started/stopped/restarted
       using the SMF framework. The requirements imposed by Brussels
       on the daemon are described in Section 4 of [2], and will be
       implemented as part of the "persistent property management"
       component of Project Brussels (see the Umbrella Case definition
       in [3]).
       In addition, PSARC/2006/499 introduces the softmac module,
       which registers "soft" MAC service providers with the kernel
       mac module on behalf of underlying legacy (non-GLDv3) devices.
       The softmac module will be used by Brussels for administering
       Public properties (defined in Section 3.2.3 of [2]) of
       non-GLDv3 drivers.

     * Clearview IP Tunneling Device Driver

       This part of the Clearview Project implements a Nemo-based
       device driver for IP Tunneling. The Brussels framework will
       provide a simplification over the discrete ioctls issued from
       dladm for property management of IP tunnel devices. Note,
       however, that this does not create a case dependency between
       the two projects; Brussels will simply improve the
       implementation of the ioctl.

     * Crossbow (PSARC/2006/357)

       Crossbow is a network virtualization project that allows
       effective sharing of physical networking resources among
       multiple users. It allows administrators to create multiple
       data devices (VNICs) that map to a single physical MAC
       instance. Crossbow implements a Nemo-based device driver for
       VNICs. As with the Clearview IP Tunnel driver, the
       implementation of property management of VNICs will be
       simplified by the Brussels Project.

     * NWAM (PSARC/2007/132)

       Network Auto-Magic is a project to simplify and automate
       network configuration on Solaris. Brussels will provide a
       consistent, unified interface for managing commonly configured
       driver properties (e.g., interface MTU, flow control,
       speed/duplex configuration) that would otherwise require the
       implementation of driver-specific system calls from the NWAM
       GUI/CLI.

   - Are you updating, copying or changing functional areas maintained
     by other groups? How are you coordinating and communicating with
     them? Do they "approve" of what you propose? If not, please
     explain the areas of disagreement.

     This project will require updates to the GLDv3 layer as well as
     to network drivers, and the i-team includes members of both
     groups.

5. How is the project delivered into the system?
   - Identify packages, directories, libraries, databases, etc.

     Through existing packages.

6. Describe the project's hardware platform dependencies.

   None.

7. System administration

   - How will the project's deliverables be installed and
     (re)configured?

     The deliverables from this project will be installed using the
     standard Solaris package utilities.

   - How will the project's deliverables be uninstalled?

     The deliverables are part of the base system and cannot be
     uninstalled.

   - Does it use inetd to start itself?

     No.

   - Does it need installation within any global system tables?

     No.

   - Does it use a naming service such as NIS, NIS+ or LDAP?

     No.

   - What are its on-going maintenance requirements (e.g. keeping
     global tables up to date, trimming files)?

     None.

   - How do this project's administrative mechanisms fit into Sun's
     system administration strategies? E.g., how do they fit under the
     Solaris Management Console (SMC) and Web-Based Enterprise
     Management (WBEM), and how do they make use of roles,
     authorizations and rights profiles? Additionally, how do they
     provide for administrative audit in support of the Solaris BSM
     configuration?

     The Project provides a cleaner administrative interface that may
     be accessed through library routines in libdladm. Thus SMC or
     WBEM may acquire appropriate interface contracts to access the
     Brussels administrative interface. However, we envision that the
     administrative facilities provided by NWAM will be the biggest
     beneficiary of the new interfaces, which will allow them to
     manage link properties more efficiently. Note that dladm is
     currently part of the Network Management execution profile, and
     users must be granted access to a role with that profile in order
     to successfully invoke its subcommands.

   - What tunable parameters are exported? Can they be changed without
     rebooting the system? Examples include, but are not limited to,
     entries in /etc/system and ndd(8) parameters. What ranges are
     appropriate for each tunable?
     What are the commitment levels associated with each tunable
     (these are interfaces)?

     Project Brussels will export several tunable parameters. The
     semantics of each of these tunables will be determined as the
     design evolves, and will be documented accordingly. Some of these
     parameters are listed in Sections 3.2.3-3.2.4 and Appendix C of
     [2]. Each tunable will be modifiable without requiring a reboot.

8. Reliability, Availability, Serviceability (RAS)

   - Does the project make any material improvement to RAS?

     No.

   - How can users/administrators diagnose failures or determine
     operational state? (For example, how could a user tell the
     difference between a failure and very slow performance?)

     In addition to the traditional network interface tools (netstat,
     ping, snoop, ...), the dladm command can be used to track the
     state of GLDv3 links.

   - What are the project's effects on boot time requirements?

     No noticeable effect on boot time is expected.

   - How does the project handle dynamic reconfiguration (DR) events?

     N/A.

   - What mechanisms are provided for continuous availability of
     service?

     N/A.

   - Does the project call panic()? Explain why these panics cannot be
     avoided.

     No.

   - How are significant administrative or error conditions
     transmitted? SNMP traps? Email notification?

     Errors in property configuration are transmitted to libdladm and
     reported via dladm.

   - How does the project deal with failure and recovery?

     Failure and repair of network links will be handled through
     existing Solaris networking RAS technologies. The configuration
     files and repositories currently used by dladm will be used to
     deal with recovery of property settings on restart.

   - Does it ever require reboot?

     No.

   - How does your project deal with network failures (including
     partition and re-integration)? How do you handle the failure of
     hardware that your project depends on?

     See the failure/recovery question above.

   - Can it save/restore or checkpoint and recover?
     See the failure/recovery question above.

   - Can its files be corrupted by failures? Does it clean up any
     locks/files after crashes?

     See the failure/recovery question above. The configuration files
     (/etc/dladm/*.conf) that track dladm configuration information
     will be managed by libdladm, and may not be manually edited. If a
     file is corrupted, the system and network devices should still
     work, though the dladm configuration information may be lost as a
     result of the corruption.

9. Observability

   - Does the project export status, either via observable output
     (e.g., netstat) or via internal data structures (kstats)?

     The current state of property configuration will be displayed via
     the dladm subcommands as described in Section 3.2.5 of [2] and
     Section 4.1 of [4].

   - How would a user or administrator tell that this subsystem is or
     is not behaving as anticipated?

     In addition to the traditional network interface tools (netstat,
     ping, snoop, ...), the dladm command can be used to track the
     state of GLDv3 links.

   - What statistics does the subsystem export, and by what mechanism?

     See Section 3.2.5 of [2].

   - What state information is logged?

     None.

   - In principle, would it be possible for a program to tune the
     activity of your project?

     N/A.

10. What are the security implications of this project?

    None.

11. What is its UNIX operational environment:

    - Which Solaris release(s) does it run on?

      Solaris Nevada.

    - Environment variables? Exit status? Signals issued? Signals
      caught? (See signal(3HEAD).)

      N/A.

    - Device drivers directly used (e.g. /dev/audio)? .rc/defaults or
      other resource/configuration files or databases?

      None.

    - Does it use any "hidden" (filename begins with ".") or temp
      files?

      No.

    - Does it use any locking files?

      No.

    - Command line or calling syntax: What options are supported?
      (Please include man pages if available.) Does it conform to
      getopt() parsing requirements?

      The project will introduce new subcommands to dladm(1M). Details
      are available in Sections 4 and 5 of [4].
    - Is there support for standard forms, e.g. "-display" for X
      programs? Are these propagated to sub-environments?

      N/A.

    - What shared libraries does it use? (Hint: if you have code, use
      "ldd" and "dump -Lv".)

      No changes to existing dependencies.

    - Identify and justify the requirement for any static libraries.

      None.

    - Does it depend on kernel features not provided in your packages
      and not in the default kernel (e.g. Berkeley compatibility
      package, /usr/ccs, /usr/ucblib, optional kernel loadable
      modules)?

      N/A.

    - Is your project 64-bit clean/ready?

      Yes.

    - Does the project depend on particular versions of supporting
      software (especially Java virtual machines)?

      No.

    - Is the project internationalized and localized?

      Yes.

    - Is the project compatible with IPV6 interfaces and addresses?

      Yes.

12. What is its window/desktop operational environment?

    N/A -- no graphical components are provided by this project.

13. What interfaces does your project import and export?

    - Please provide a table of imported and exported interfaces,
      including stability levels. Pay close attention to the
      classification of these interfaces in the Interface Taxonomy --
      e.g., "Committed," "Uncommitted," and "*Private;" see:
      http://sac.sfbay/cgi-bin/bp.cgi?NAME=interface_taxonomy.bp

      IMPORTED INTERFACES:

      Interface                  Classification         Comments
      ------------------------------------------------------------------
      GLDv3 MAC interfaces       Consolidation Private  PSARC 2006/249
      libdladm API               Consolidation Private  PSARC 2004/471
      mac_maxsdu_update          Consolidation Private  Clearview IP
                                                        Tunneling (to be
                                                        ARC'ed)

      EXPORTED INTERFACES:

      Interface                  Classification         Comments
      ------------------------------------------------------------------
      mac_register               Consolidation Private  (modified)
      mac_callbacks_t            Consolidation Private  (modified)
      MC_SETPROP                 Consolidation Private  Section 2.1 of [4]
      MC_GETPROP                 Consolidation Private  Section 2.1 of [4]
      mc_setprop                 Consolidation Private  Section 2.2 of [4]
      mc_getprop                 Consolidation Private  Section 2.3 of [4]
      dld_ioc_prop_val_t         Consolidation Private  Section 3.2 of [4]
      DLDIOCSETPROP              Consolidation Private
      DLDIOCGETPROP              Consolidation Private
      DLD_PROP_PRIVATE           Consolidation Private
      DLD_PROP_*                 Consolidation Private  Prefix for
                                                        Brussels
                                                        properties
      dladm_is_wlan_prop         Consolidation Private
      dladm_get_single_mac_stat  Consolidation Private
      default_mtu                Committed
      flowctrl                   Volatile
      ifspeed                    Committed
      link_duplex                Committed
      link_up                    Committed
      adv_autoneg_cap            Committed              PSARC 2003/581
      adv_asmpause_cap           Committed              PSARC 2003/581
      adv_pause_cap              Committed              PSARC 2003/581
      adv_1000fdx_cap            Committed              PSARC 2003/581
      adv_1000hdx_cap            Committed              PSARC 2003/581
      adv_100fdx_cap             Committed              PSARC 2003/581
      adv_100hdx_cap             Committed              PSARC 2003/581
      adv_10fdx_cap              Committed              PSARC 2003/581
      adv_10hdx_cap              Committed              PSARC 2003/581
      ndd on data-link drivers   Obsolete               replaced by dladm

      Proposed changes to dladm(1M) (Uncommitted):

      current subcommand  comments
      ----------------------------------------------------------------
      show-linkprop       support for -o option; see Section 4.2 of [4]
      show-ether          new; see Section 4.1 of [4]
      show-dev            support for -o option; see Section 4.2 of [4]
      show-secobj         support for -o option; removal of
                          undocumented -d option; see Section 4.2 of [4]

      Proposed changes to ieee802.3(5) (Uncommitted):

      - ieee802.3(5) currently prescribes the usage of ndd(1M) for
        setting MII properties. This will be replaced by a
        prescription to use dladm(1M).
      - Descriptions of the pause/asmpause parameters in ieee802.3(5)
        will be clarified.
        See [5].
      - The {adv,lp,cap,link}_cap_{asm}pause parameters will be marked
        read-only via ndd/kstat.

    - Exported public library APIs and ABIs
    - Protocols (public or private)
    - Drag and Drop
    - ToolTalk
    - Cut/Paste
    - Other interfaces

      N/A.

    - What other applications should it interoperate with? How will it
      do so?

      See the answers to Question 4.

    - Is it "pipeable"? How does it use stdin, stdout, stderr?

      Yes, via the existing stdin/stdout/stderr handling already
      supported by dladm.

    - Explain the significant file formats, names, syntax, and
      semantics.

      None.

    - Is there a public namespace? (Can third parties create names in
      your namespace?) How is this administered?

      Yes; see Section 3.2.1 (page 9) and Section 3.2.4 of [2].

    - Are the externally visible interfaces documented clearly enough
      for a non-Sun client to use them successfully?

      Administrative interfaces will be documented in the Solaris
      "System Administration Guide: IP Services". Driver interfaces
      will be documented on OpenSolaris in the Nemo Design document at
      http://opensolaris.org/os/project/nemo/nemo-design.pdf

14. What are its other significant internal interfaces
    (inter-subsystem and inter-invocation)?

    - Protocols (public or private)

      N/A.

    - Private ToolTalk usage

      N/A.

    - Files

      N/A.

    - Other

      N/A.

    - Are the interfaces re-entrant?

      Yes.

15. Is the interface extensible? How will the interface evolve?

    - How is versioning handled?

      The interfaces exported by this project (including libdladm and
      other kernel interfaces) are either project-private or
      consolidation-private; they will be revised together with their
      consumers when they evolve. Data passed from dld to the driver
      via the setprop/getprop ioctl carries a version number that may
      be used by the driver to identify the format of the data
      (Section 3.2.1 of [2]). The dladm(1M) command will evolve in
      accordance with the Interface Taxonomy policy.

    - What was the commitment level of the previous version?

      N/A.
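      The versioning scheme above can be sketched in plain C: the
      payload carries a version number identifying the format of what
      follows, and the driver rejects versions it does not recognize.
      The struct layout, the toy names, and the EINVAL return are
      illustrative placeholders; the real dld_ioc_prop_val_t layout
      and the rejection error codes are those defined in [4] and
      Section 3.2.3 of [2].

      ```c
      /*
       * Illustrative sketch of a versioned property payload; NOT the
       * actual dld_ioc_prop_val_t definition.
       */
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      #define PROP_VERSION_1  1  /* only format this toy driver knows */

      typedef struct {
              unsigned int    pv_version;     /* format of fields below */
              char            pv_name[32];
              char            pv_val[32];
      } toy_prop_val_t;

      static int
      toy_driver_setprop(const toy_prop_val_t *pv)
      {
              if (pv->pv_version != PROP_VERSION_1)
                      return (EINVAL);        /* placeholder error code */
              (void) printf("set %s = %s\n", pv->pv_name, pv->pv_val);
              return (0);
      }

      int
      main(void)
      {
              toy_prop_val_t ok = { PROP_VERSION_1, "flowctrl", "no" };
              toy_prop_val_t unknown = { 99, "flowctrl", "no" };

              (void) toy_driver_setprop(&ok);   /* accepted: known format */
              if (toy_driver_setprop(&unknown) != 0)
                      (void) printf("version 99 rejected\n");
              return (0);
      }
      ```

      Carrying the version in the payload is what lets dld and the
      drivers be revised together with their consumers without a
      flag-day: an older driver simply rejects formats it was not
      built to parse.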
    - Can this version co-exist with existing standards and with
      earlier and later versions or with alternative implementations
      (perhaps by other vendors)?

      Yes.

    - What are the clients over which a change should be managed?

      Device drivers should follow the requirements listed in Section
      3 of [2]. Drivers written after the delivery of PSARC 2007/429
      will no longer have to support the ndd(1M) interfaces, but
      should instead support the dladm(1M) interface by exporting
      mc_setprop/mc_getprop entry points. Wherever possible, driver
      properties must be implemented using a modular design so that a
      property may be reset dynamically without requiring a reset of
      the driver.

    - How is transition to a new version to be accomplished? What are
      the consequences to ISVs and their customers?

      See the answer to the versioning question above. An unrecognized
      version number should be rejected by the driver with error codes
      as described in Section 3.2.3 of [2].

16. How do the interfaces adapt to a changing world?

    See the answer to the question about interaction with
    administrative strategies in Question 7. This project allows
    layered administrative products to apply changes to all network
    links, instead of being constrained to per-interface system-call
    incantations for a restricted set of links.

    - What is its relationship with (or difficulties with) multimedia?
      3D desktops? Nomadic computers? Storage-less clients? A
      networked file system model (i.e., a network-wide file manager)?

      PSARC 2007/429 Framework delivery does not alter the behavior of
      the system in the environments under question.

17. Interoperability

    N/A.

18. Performance

    - How will the project contribute (positively or negatively) to
      "system load" and "perceived performance"?

      The framework should not cause any noticeable degradation of
      network performance.

    - What are the performance goals of the project? How were they
      evaluated? What is the test or reference platform?

      See the answer to the previous question.

    - Does the application pause for significant amounts of time?
      Can the user interact with the application while it is
      performing long-duration tasks?

      No.

    - What is your project's MT model? How does it use threads
      internally? How does it expect its client to use threads? If it
      uses callbacks, can the called entity create a thread and
      recursively call back?

      The project will be fully MT.

    - What is the impact on overall system performance? What is the
      average working set of this component? How much of this is
      shared/sharable by other apps?

      N/A.

    - Does this application "wake up" periodically? How often and
      under what conditions? What is the working set associated with
      this behavior?

      N/A.

    - Will it require large files/databases (for example, new fonts)?

      No.

    - Do files, databases or heap space tend to grow with time/load?
      What mechanisms does the user have to use to control this? What
      happens to performance/system load?

      No.

19. Please identify any issues that you would like the ARC to address.

    - Interface classification, deviations from standards,
      architectural conflicts, release constraints...
    - Are there issues or related projects that the ARC should advise
      the appropriate steering committees about?

20. Appendices to include

    [1] One-pager
        (http://sac.sfbay/arc/PSARC/2007/429/20070725_sowmini.varadhan)
    [2] "Brussels - NIC configuration Design Specification"
        http://opensolaris.org/os/project/brussels/files/brussels.pdf
    [3] "Brussels Umbrella Document"
        http://opensolaris.org/os/project/brussels/files/brussels-umbrella.txt
    [4] "PSARC 2007/429 Brussels Framework Interfaces Specification"
    [5] Revised ieee802.3 manpage: manpages/ieee802.3.5[.orig, .diff]
        [in materials]
    [6] Revised dladm manpage: manpages/dladm.1m[.orig, .diff]
        [in materials]
    [7] Revised bge manpage: manpages/bge.7d[.orig, .diff]
        [in materials]