1. What specifically is the proposal that we are reviewing?

   This is a proposal to introduce an SMB/CIFS client to Solaris. The
   virtual filesystem module and its related device driver will permit
   users to find and mount filesystems, and to access and modify files
   and directories, from Windows machines and from Unix/Linux machines
   running Samba, via the SMB/CIFS protocol. SMB (Server Message Block)
   and CIFS (Common Internet File System) are used interchangeably or
   together throughout our materials due to our project history.

   - What is the technical content of the project?

     We will introduce the following modules:

     - an 'smbfs' virtual filesystem kernel module with high-level logic
     - an 'nsmb' device driver module with most of the CIFS protocol code
     - an SMB/CIFS-specific mount utility
     - an SMB/CIFS-specific unmount utility
     - an SMB/CIFS user-level library, libsmbfs.so
     - a new SMB/CIFS utility, 'smbutil'
     - a new SMF service, svc:/network/smb/client:default, to store
       global state

     For more information about our deliverables, see the design
     document [3], section 3. For more information about our
     interfaces, see the design document [3], section 4. Also see our
     block diagram in Section 2.3 (also available as [4] in the
     materials).

   - Is this a new product, or a change to a pre-existing one? If it is
     a change, would you consider it a "major", "minor", or "micro"
     change?

     We are introducing new functionality to the existing product,
     Solaris. We currently expect the CIFS client to be eligible for a
     patch release.

   - If your project is an evolution of a previous project, what
     changed from one version to another?

     N/A

   - What is the motivation for it, in general as well as specific
     terms? (Note that not everyone on the ARC will be an expert in
     the area.)

     Unix LANs usually use NFS to share remote files, but Windows LANs
     use SMB/CIFS. Solaris has not included this functionality, so
     there are files on corporate networks which are not easily
     accessible from a Solaris machine.
     Third-party packages exist, but they are poorly implemented; this
     functionality belongs in core Solaris.

   - What are the expected benefits for Sun?

     We will remove a barrier to new Solaris users and an annoyance to
     most current Solaris users. See the "Business Justification"
     section of the requirements document [2] for more information.

   - By what criteria will you judge its success?

     We will be successful if customers have usable and reasonably
     performant access to data on SMB/CIFS servers without having
     stability problems. For more detailed information, see the
     requirements document [2].

2. Describe how your project changes the user experience, upon
   installation and during normal operation.

   - What does the user perceive when the system is upgraded from a
     previous release?

     After an upgrade, a user will be able to use 'smbutil' to view
     SMB/CIFS shares and do NetBIOS name lookups, and regular users
     will be able to use 'mount -F smbfs' to mount such a share on a
     directory they own. Alternatively, an admin will be able to add
     an automounter map entry for a CIFS share.

     A mount will fail unless authentication is possible, either via
     an interactive password, a password stored by 'smbutil login', a
     hashed password from the user's $HOME/.nsmbrc file, or Kerberos
     credentials. Once mounted, the user will be able to access and
     modify files and directories on the SMB/CIFS share.

3. What is its plan?

   - What is its current status? Has a design review been done? Are
     there multiple delivery phases?

   | Design review and (phase 1) functionality are complete.
   |
   | We've refined our plan for multiple delivery phases, where
   | "Phase 1" will deliver all of the functionality specified in our
   | requirements document [2] except for:
   |
   |   a: Compliance with the IPv6 "big rule" will not be possible
   |      until we implement "SMB over plain TCP" (in Phase 2),
   |      because NetBIOS only supports IPv4.
   |   b: Automatic connect or reconnect is delayed until Phase 2.
   |
   | We do expect to deliver further SMB/CIFS functionality and
   | integration in future putbacks, with a "Phase 2" delivery
   | focusing on items mentioned in "What we're not doing" in the
   | requirements document [2], plus the above items (a, b).

4. Are there related projects in Sun?

   - If so, what is the proposal's relationship to their work? Which
     not-yet-delivered Sun (or non-Sun) projects (libraries, hardware,
     etc.) does this project depend upon? What other projects, if any,
     depend on this one?

     The NAS Software group under Barry Greenberg is porting SMB/CIFS
     server functionality from the Procom NAS platform to Solaris
     Nevada. We are coordinating to manage duplication of code and
     functionality and to harmonize things like our places in the SMF
     namespace.

     One area of duplication we've discussed is that the CIFS server
     includes a rudimentary client to talk to domain controllers; this
     is not real overlap in practice, and we will not harmonize until
     after both projects put back.

     Another area of duplication is NetBIOS; the CIFS client includes
     a minimal NetBIOS client-side implementation, while the CIFS
     server has a server-side NetBIOS implementation and another
     client-side implementation. There is not very much code involved,
     but the entanglement of the server code means it would take
     significant work to unify the client implementations. Due to
     schedule pressures, we prefer to defer the integration of these
     two NetBIOS implementations to a follow-on project.

     We called out the Sparks, Winchester and Reno projects as
     dependencies in our requirements document, but we do not believe
     our current scope requires their functionality. We do depend on
     kernel iconv (PSARC 2005/446) and kernel md4 (PSARC 2007/139),
     both in Nevada, and on some code done for the CIFS server
     (PSARC 2006/715) for persistent state management in SMF and
     sharectl(1M).

   - Are you updating, copying or changing functional areas maintained
     by other groups?
     How are you coordinating and communicating with them? Do they
     "approve" of what you propose? If not, please explain the areas
     of disagreement.

     We are delivering changes to sharectl(1M) and libshare.so,
     PSARC 2005/374, and are working with the originating engineer to
     do so.

5. How is the project delivered into the system?

   - Identify packages, directories, libraries, databases, etc.

   | For the contents of our SUNWsmbfs* packages, please see the
     design document [3], Section 3.3. We will add these packages to
     the End User metacluster.

6. Describe the project's hardware platform dependencies.

   - Explain any reasons why it would not work on both SPARC and
     Intel.

     We will test and demand identical results on both SPARC and
     Intel.

7. System administration

   - How will the project's deliverables be installed and
     (re)configured?
   - How will the project's deliverables be uninstalled?

   | As stated in section 5 above, SUNWsmbfs* should be present on
     most installs as part of the End User metacluster, and could be
     deinstalled with 'pkgrm'.

     Most configuration would be deciding which SMB/CIFS filesystems
     to mount, and whether manual mounts, mounts via /etc/vfstab, or
     automounter map entries would be the best choice. System-wide
     defaults such as minimum authentication levels and timeouts may
     be set via sharectl(1M), with persistent data stored in SMF as
     properties of the service svc:/network/smb/client:default.

   - Does it use inetd to start itself?

     No.

   - Does it need installation within any global system tables?

     No.

   - Does it use a naming service such as NIS, NIS+ or LDAP?

     Yes, it maps names to addresses via the name service switch
     first, and also does NetBIOS name lookup queries where
     appropriate.

   - What are its on-going maintenance requirements (e.g. keeping
     global tables up to date, trimming files)?

     N/A

   - How do this project's administrative mechanisms fit into Sun's
     system administration strategies?
     E.g., how does it fit under the Solaris Management Console (SMC)
     and Web-Based Enterprise Management (WBEM)? How does it make use
     of roles, authorizations and rights profiles? Additionally, how
     does it provide for administrative audit in support of the
     Solaris BSM configuration?

     We will add a new rights profile to support users mounting on
     directories they own and unmounting their smbfs mounts. See the
     design document [3] section 6.3 for more information.

   - What tunable parameters are exported? Can they be changed without
     rebooting the system? Examples include, but are not limited to,
     entries in /etc/system and ndd(1M) parameters. What ranges are
     appropriate for each tunable? What are the commitment levels
     associated with each tunable (these are interfaces)?

     Our only tuning opportunities now are those that set
     protocol-level timeouts on all shares, on all shares in a
     workgroup, on all shares on a particular server, or on just one
     share. These are part of the sharectl(1M) extensions we plan.

8. Reliability, Availability, Serviceability (RAS)

   - Does the project make any material improvement to RAS?

     No.

   - How can users/administrators diagnose failures or determine
     operational state? (For example, how could a user tell the
     difference between a failure and very slow performance?)

     Users would use 'snoop' or 'Ethereal' and 'dtrace' to detect
     forward progress, or the lack thereof, in the CIFS client. Our
     SMF service will be used to store system-wide settings and will
     be transient, so SMF tools will not provide useful information.

   - What are the project's effects on boot time requirements?

     We expect the impact to be negligible.

   - How does the project handle dynamic reconfiguration (DR) events?

     N/A

   - What mechanisms are provided for continuous availability of
     service?

     N/A

   - Does the project call panic()? Explain why these panics cannot be
     avoided.

     We will call panic() when data structure inconsistency threatens
     to corrupt data.
     We will analyze the calls to panic() in the current code base and
     minimize them as much as possible. We will also have calls to
     ASSERT() for debugging purposes where we feel it is needed.

   - How are significant administrative or error conditions
     transmitted? SNMP traps? Email notification?

     We will log serious issues via syslog().

   - How does the project deal with failure and recovery?

     See the question about network failures below.

   - Does it ever require reboot? If so, explain why this situation
     cannot be avoided.

     No.

   - How does your project deal with network failures (including
     partition and re-integration)? How do you handle the failure of
     hardware that your project depends on?

   | This project has no dependency on hardware. Lost requests or
   | responses cause protocol-level retransmissions per SMB/CIFS
   | norms.
   |
   | Lost connections require user intervention to unmount and remount
   | the resources affected. We plan to improve on this in a "phase 2"
   | project (RFE 6587713).

   - Can it save/restore or checkpoint and recover?

     N/A

   - Can its files be corrupted by failures? Does it clean up any
     locks/files after crashes?

     N/A

9. Observability

   - Does the project export status, either via observable output
     (e.g., netstat) or via internal data structures (kstats)?

     Its TCP connections will be visible via netstat as expected. We
     don't plan to support kstats at this time. SMF will not show
     significant details of operation.

   | We do provide "mdb" modules that can display all the important
   | data structures in the new kernel modules.

   - How would a user or administrator tell that this subsystem is or
     is not behaving as anticipated?

     A combination of dtrace and Ethereal/Wireshark or snoop would be
     used to diagnose the client. We will add static dtrace probes at
     key points.

   - What statistics does the subsystem export, and by what mechanism?

     N/A

   - What state information is logged?

     N/A

   - In principle, would it be possible for a program to tune the
     activity of your project?

     No.

10.
    What are the security implications of this project?

   - What security issues do you address in your project?

     Please see Section 6 "Security Considerations" of our design
     document [3].

   - The Solaris BSM configuration carries a Common Criteria (CC)
     Controlled Access Protection Profile (CAPP) -- Orange Book C2 --
     rating and a Role Based Access Control Protection Profile (RBAC)
     rating; does the addition of your project affect this rating?
     E.g., does it introduce interfaces that make access or privilege
     decisions that are not audited, does it introduce removable media
     support that is not managed by the allocate subsystem, does it
     provide administration mechanisms that are not audited?

     We are aware of no issues here.

   - Is system or subsystem security compromised in any way if your
     project's configuration files are corrupt or missing?

     No. Our configuration data is in SMF, and if it were erased, the
     greatest loss would be that lower authentication levels might be
     permitted.

   - Please justify the introduction of any (all) new setuid
     executables.

     N/A

   - Include a thorough description of the security assumptions,
     capabilities and any potential risks (possible attack points)
     being introduced by your project. A separate Security
     Questionnaire

       http://sac.sfbay/cgi-bin/bp.cgi?NAME=Security.bp

     is provided for more detailed guidance on the necessary
     information. Cases are encouraged to fill out and include the
     Security Questionnaire (leveraging references to existing
     documentation) in the case materials. Projects must highlight
     information for the following important areas:

     - What features are newly visible on the network, and how they
       are protected from exploitation (e.g. unauthorized access,
       eavesdropping).

     - If the project makes decisions about which users, hosts,
       services, etc. are allowed to access resources it manages, how
       the requestor's identity is determined and what data is used to
       decide whether access is granted; also how this data is
       protected from tampering.
     - What privileges beyond those of a common user (e.g. 'noaccess')
       this project requires, and why they are necessary.

     - What parts of the project are active upon default install, and
       how they can be turned off.

     See our security questionnaire [9] for this information. Worth
     noting here is that we propose to retain the capability of
     accepting a password on the command line; this is a normal part
     of the UNC resource name in the CIFS world, and we would like to
     stay compatible with that behaviour. We also note that
     smbclient(1) from the Samba suite allows a password on the
     command line.

11. What is its UNIX operational environment:

   - Which Solaris release(s) does it run on?

     Solaris Nevada; later, a Solaris 10 update release.

   - Environment variables? Exit status? Signals issued? Signals
     caught? (See signal(3HEAD).)

     TBD

   - Device drivers directly used (e.g. /dev/audio)? .rc/defaults or
     other resource/configuration files or databases? Does it use any
     "hidden" (filename begins with ".") or temp files?

     Aside from our own /dev/nsmb, we don't rely on device drivers.
   | libsmbfs will consult $HOME/.nsmbrc for preferences, which may
   | override system-wide preferences in SMF. See [7] for more
   | information on these settings.

   - Does it use any locking files?

     No.

   - Command line or calling syntax: What options are supported?
     (Please include man pages if available.) Does it conform to
     getopt() parsing requirements?

     We plan to be CLIP-compliant. See our draft man pages [5] and
     [6].

   - Is there support for standard forms, e.g. "-display" for X
     programs? Are these propagated to sub-environments?

     N/A

   - What shared libraries does it use? (Hint: if you have code, use
     "ldd" and "dump -Lv".)
     % ldd smbutil
             libsmbfs.so.1 =>   (file not found)
             libsocket.so.1 =>  /lib/libsocket.so.1
             libnsl.so.1 =>     /lib/libnsl.so.1
             libc.so.1 =>       /lib/libc.so.1
             libmp.so.2 =>      /lib/libmp.so.2
             libmd.so.1 =>      /lib/libmd.so.1
             libscf.so.1 =>     /lib/libscf.so.1
             libuutil.so.1 =>   /lib/libuutil.so.1
             libm.so.2 =>       /lib/libm.so.2
             /platform/SUNW,Sun-Fire/lib/libc_psr.so.1
             /platform/SUNW,Sun-Fire/lib/libmd_psr.so.1

   - Identify and justify the requirement for any static libraries.

     N/A

   - Does it depend on kernel features not provided in your packages
     and not in the default kernel (e.g. Berkeley compatibility
     package, /usr/ccs, /usr/ucblib, optional kernel loadable
     modules)?

     No.

   - Is your project 64-bit clean/ready? If not, are there any
     architectural reasons why it would not work in a 64-bit
     environment? Does it interoperate with 64-bit versions?

     We're 64-bit ready and will test on amd64, i386 and sparcv9.

   - Does the project depend on particular versions of supporting
     software (especially Java virtual machines)? If so, do you
     deliver a private copy? What happens if a conflicting or
     incompatible version is already or subsequently installed on the
     system?

     N/A

   - Is the project internationalized and localized?

     The current code is internationalized.

   - Is the project compatible with IPv6 interfaces and addresses?

   | The current code supports only NetBIOS-style connections, and
   | the NetBIOS protocol uses IPv4-specific address formats. We will
   | therefore NOT be able to comply with the IPv6 "big rule" in our
   | phase 1 delivery.
   |
   | We plan to implement "SMB over plain TCP" (port 445) in a phase 2
   | project, which will lift the IPv4 limitation.

12. What is its window/desktop operational environment?

   - Is it ICCCM compliant? (ICCCM is the standard protocol for
     interacting with window managers.)
   - X properties: Which ones does it depend on? Which ones does it
     export, and what are their types?
   - Describe your project's support for User Interface facilities
     including Help, Undo, Cut/Paste, Drag and Drop, Props, Find,
     Stop.
   - How do you respond to property change notification and ICCCM
     client messages (e.g. do you respond to "save workspace")?
   - Which window-system toolkit/desktop does your project depend on?
   - Can it execute remotely? Is the user aware that the tool is
     executing remotely? Does it matter?
   - Which X extensions does it use (e.g. SHM, DGA, Multi-Buffering)?
     (Hint: use "xdpyinfo".)
   - How does it use colormap entries? Can you share them?
   - Does it handle 24-bit operation?

     None of these issues apply. We do plan to work with the Gnome
     Nautilus team to get Nautilus to use our code to browse and mount
     SMB/CIFS shares.

   | [ Gnome integration is planned for Phase 2. ]

13. What interfaces does your project import and export?

   - Please provide a table of imported and exported interfaces,
     including stability levels. Pay close attention to the
     classification of these interfaces in the Interface Taxonomy --
     e.g., "Standard," "Stable," and "Evolving;" see:

       http://sac.sfbay/cgi-bin/bp.cgi?NAME=interface_taxonomy.bp

     Use the following format:

     Interfaces Imported

     Interface       Classification         Comments
     libkrb5         Contract External      PSARC 2006/027, in Nevada
     uconv routines  Consolidation Private  PSARC 2005/446, in Nevada
     md4 routines    Consolidation Private  PSARC 2007/139, in Nevada
   | kTLI calls      Consolidation Private  see design [3], 8.5.2.3
     PAM module      Consolidation Private  PSARC 2007/303, dependency

     Interfaces Exported

     Interface       Classification         Comments
     smbutil         Committed
     mount_smbfs     Committed
     umount_smbfs    Committed
     $HOME/.nsmbrc   Committed              See nsmbrc(4) for format
     libsmbfs.so     Project Private        Will contract with Nautilus
     smbfs module    Consolidation Private
     nsmb module     Project Private
     SMF service     Committed              svc:/network/smb/client:default
   | SUNWsmbfs*      Committed              Package names

   - Exported public library APIs and ABIs

     N/A

   - Protocols (public or private)

     This project implements the CIFS protocol, incompletely defined
     by reference [12]. A good book is at reference [13].
   - Drag and Drop, ToolTalk, Cut/Paste

     N/A

   - Other interfaces

     - What other applications should it interoperate with? How will
       it do so?
     - Is it "pipeable"? How does it use stdin, stdout, stderr?
     - Explain the significant file formats, names, syntax, and
       semantics.
     - Is there a public namespace? (Can third parties create names
       in your namespace?) How is this administered?
     - Are the externally visible interfaces documented clearly
       enough for a non-Sun client to use them successfully?

     N/A

14. What are its other significant internal interfaces
    (inter-subsystem and inter-invocation)?

   - Protocols (public or private)
   - Private ToolTalk usage
   - Files
   - Other
   - Are the interfaces re-entrant?

     We have libsmbfs.so.1 to format ioctl()s to the nsmb driver (see
     8.3.2 in the design document [3]), the ioctl()s available to
     libsmbfs (see 8.5.2.1 in the design document [3]), and the
     interface from smbfs to nsmb (see 8.5.2.2 in the design document
     [3]).

15. Is the interface extensible? How will the interface evolve?

   - How is versioning handled?

     We will follow the VFS versioning conventions to deal with VFS
     changes. The client will negotiate dialects to use with the
     SMB/CIFS server it talks to, so protocol details can evolve.

   - What was the commitment level of the previous version?

     N/A

   - Can this version co-exist with existing standards and with
     earlier and later versions or with alternative implementations
     (perhaps by other vendors)?

     Yes, the client supports several popular CIFS protocol dialects.

   - What are the clients over which a change should be managed?

     All clients which need to support a new protocol dialect.

   - How is transition to a new version to be accomplished? What are
     the consequences to ISVs and their customers?

     New protocol dialects mean new versions of our kernel modules.

16. How do the interfaces adapt to a changing world?

   - What is its relationship with (or difficulties with) multimedia?
     3D desktops? Nomadic computers? Storage-less clients?
     A networked file system model (i.e., a network-wide file
     manager)?

     N/A to the above.

17. Interoperability

   - If applicable, explain your project's interoperability with the
     other major implementations in the industry. In particular, does
     it interoperate with Microsoft's implementation, if one exists?

     Yes, the major business case is about interoperability with
     Microsoft. We've listed server interoperability in our
     requirements document [2].

   - What would be different about installing your project in a
     heterogeneous site instead of a homogeneous one (such as Sun)?

     You'd actually get to use SMB/CIFS servers based on Windows
     instead of Samba and NAS 5310.

   - Does your project assume that a Solaris-based system must be in
     control of the primary administrative node?

     N/A

18. Performance

   - How will the project contribute (positively or negatively) to
     "system load" and "perceived performance"?

     The client will occupy space in the page cache, CPU cycles, and
     bandwidth on the networks used; this should all be in proportion
     to the data being moved.

   - What are the performance goals of the project? How were they
     evaluated? What is the test or reference platform?

     Our requirements document [2] calls out a simple throughput
     comparison to CIFS-on-Linux on the same hardware. Since this
     functionality is not available on Solaris now, the targets are
     not aggressive.

   - Does the application pause for significant amounts of time? Can
     the user interact with the application while it is performing
     long-duration tasks?

     Client operations will sleep while waiting for network traffic;
     these pauses should be noticeable to processes waiting for
     operations to complete, and should be interruptible in most
     cases, as with NFS.

   - What is your project's MT model? How does it use threads
     internally? How does it expect its client to use threads? If it
     uses callbacks, can the called entity create a thread and
     recursively call back?

   | The kernel code will all be MT-safe.
   | The device driver uses a thread for each connection (a reader),
   | created when the connection is established.

   - What is the impact on overall system performance? What is the
     average working set of this component? How much of this is
     shared/sharable by other apps?

     Unknown at this time.

   - Does this application "wake up" periodically? How often and
     under what conditions? What is the working set associated with
     this behavior?

     N/A

   - Will it require large files/databases (for example, new fonts)?

     No.

   - Do files, databases or heap space tend to grow with time/load?
     What mechanisms does the user have to use to control this? What
     happens to performance/system load?

     N/A

19. Please identify any issues that you would like the ARC to
    address.

   - Interface classification, deviations from standards,
     architectural conflicts, release constraints...
   - Are there issues or related projects that the ARC should advise
     the appropriate steering committees about?

     N/A

20. Appendices to include

   - One-Pager
     [0] http://sac.sfbay.sun.com/Archives/CaseLog/arc/PSARC/2005/695/20051115_p.kumar

   - References to other documents

     20 Questions (this document) (also in materials)
     [1] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/20questions.txt

     Requirements Document (also in materials)
     [2] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/cifs_client_prd.html

     Design Document (also in materials)
     [3] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/CIFS_Design_Doc.html

     Block Diagram (also in materials)
     [4] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/CIFS_Client_diagram.jpg

     Draft man pages (also in materials)
     [5] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/smbutil.1.txt
     [6] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/mount_smbfs.1m.txt
     [7] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/nsmbrc.4.txt
     [8] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/shaetctl.1m.txt

     Security Questionnaire (also in materials)
     [9] http://jurassic.eng/net/nfs-build/export1/projects/cifs/commitment.materials/sec_questions.html

     CIFS wiki
     [10] http://cifs.central.sun.com/wiki/index.php/Main_Page

     CIFS client wiki
     [11] http://cifs.central.sun.com/wiki/index.php/A_native_CIFS_client_for_Solaris

     SNIA CIFS Technical Reference
     [12] http://jurassic.eng/home/thurlow/work/cifs/CIFS_Technical_Reference.pdf

     "Implementing CIFS" book by Christopher Hertel
     [13] http://www.ubiqx.org/cifs/index.html