| C A R T E L / 9 7
| - - - - - - - - -
|
| University of Michigan, Ann Arbor
| June 24-25th, 1997
 


Meeting notes from June 25th -- morning

(from Scott Brylow, Stanford - edited by Gavin Eadie, UMich)

File Systems

AFS

MIT servers - just removed DECstations from their primary cell. Almost exclusively Sparc 5s with 2GB disks running SunOS. Want to go to Solaris so partitions can be >2GB. Recent problems with disks dying; thinking about RAID technology. The cell (athena.mit.edu) is about 200GB, 85% full. Total of 40 AFS servers. How much user space? 12.5MB, want to go to 15MB. High-usage folks can get space on a separate volume for the term.

CMU uses ADM(?) to manage project volumes -- they last for the semester, later they get blown away. Default student quota is 9MB, default staff is 15MB; hope to bump that up to 25MB for everyone. The drives-dying problem was reduced when they got a UPS. The cell is about 70GB used, 200GB available in computing services. Computer science has 500-600GB that's 80-85% full. CompSci offers a default user quota of ~10MB, max of 100MB. Privilege delegation by a large server -- currently lets people manage their own quotas, moves volumes if they run out of room, etc. Peter from CMU.

Stanford has about 15 AFS servers. Retired AIX and DEC servers recently; pure Sun now. 3 Sparc 10s for DB and K servers -- 15 Sparc 20s for file servers. 6 big RAID arrays populated with 2GB drives. 2 StorageWorks arrays with about 54GB. RAID 5 is slow, with Veritas doing the RAID work in software. I/O is sufficient to support 4500 people at a time. Access to 400GB of data-center big-disk space; up to 1TB by end of summer. Default quota of 10MB; students can get additional class-based quota. Nightly updates manage quota changes due to class status changes. Some provisions for groups and depts for web space - typically 5MB. Profs can sign a form and get up to 100MB for students, etc. Desperate people can rent a chunk of space -- maybe $7/MB? The DEC StorageWorks are amazing -- old, but with newer controllers, dual redundant, etc. One DEC StorageWorks had a loss of its cache battery and did not fail over to the other controller; instead it shut down the entire box.
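A minimal sketch of what that nightly quota pass might look like, assuming a flat file of per-class grants and the standard AFS "fs setquota" command; the grants-file format, cell path, and home-directory layout below are made up for illustration, not Stanford's actual job:

    # Sketch only: compute each user's quota as the 10MB default plus any
    # class-based grants, then apply it with "fs setquota".

    import subprocess

    DEFAULT_KB = 10 * 1024                      # 10MB base quota, in kbytes

    def class_grants(path="class_grants.txt"):
        """Read 'login extra_kbytes' lines derived from the registrar's class data."""
        grants = {}
        with open(path) as f:
            for line in f:
                login, extra = line.split()
                grants[login] = grants.get(login, 0) + int(extra)
        return grants

    def apply_quotas(grants):
        for login, extra in grants.items():
            home = "/afs/ir/users/%s/%s/%s" % (login[0], login[1], login)   # hypothetical layout
            subprocess.run(["fs", "setquota", "-path", home,
                            "-max", str(DEFAULT_KB + extra)], check=True)

    if __name__ == "__main__":
        apply_quotas(class_grants())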

UM misconfigured some drives a few years ago due to poor installation and overheating problems.

AFS backup

MIT is using the old backup system from AFS 3.1 or so. Thinking about using the stuff Transarc ships with 3.4a. A couple of machines, one DLT stacker with 10 tape drives. Perl scripts wrapped around it.

CMU has a grander vision than MIT: not just a new AFS backup system, but a new backup system for AFS, workstations, eventually desktop machines, etc. It's called the Unified Backup System. Right now they are running a homebrew system. It knows which volumes to back up. Backup volumes are cloned, dumped, and spooled to disk. The same machine has a process that takes the spooled volumes and writes them out to tape. Decoupling tape from network problems improves performance and reliability. Uses no Transarc pieces; dumps and restores directly. They do daily, weekly, and monthly incrementals, and a full dump every 6 months. The system knows how many incrementals are needed to restore and doesn't place the volume online until all are in place. Three Exabyte 8200s, 16GB total storage. Running on the edge.
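As a rough illustration of that restore bookkeeping (not CMU's actual code): the chain needed to bring a volume fully up to date is the newest full dump plus the newest incremental of each finer level taken after it. The dump records, level names, and tape labels below are assumptions.

    # Sketch only: pick the dumps needed to restore a volume, newest full first,
    # then the newest monthly/weekly/daily incremental taken after the prior level.

    from dataclasses import dataclass
    from datetime import date

    LEVELS = ["full", "monthly", "weekly", "daily"]     # coarsest to finest

    @dataclass
    class Dump:
        volume: str
        level: str      # one of LEVELS
        taken: date
        tape: str       # tape label where this dump lives

    def restore_chain(dumps):
        """Return the dumps to apply, in order, to bring the volume fully up to date."""
        chain, cutoff = [], date.min
        for level in LEVELS:
            candidates = [d for d in dumps if d.level == level and d.taken >= cutoff]
            if candidates:
                newest = max(candidates, key=lambda d: d.taken)
                chain.append(newest)
                cutoff = newest.taken       # finer levels must postdate this dump
        return chain

    history = [
        Dump("user.pjh", "full",    date(1997, 1, 5),  "EXB-0012"),
        Dump("user.pjh", "monthly", date(1997, 6, 1),  "EXB-0107"),
        Dump("user.pjh", "weekly",  date(1997, 6, 15), "EXB-0123"),
        Dump("user.pjh", "daily",   date(1997, 6, 24), "EXB-0131"),
    ]
    # The volume would not go back online until every dump in this chain has been
    # restored, which matches the behavior described in the notes.
    for d in restore_chain(history):
        print("restore", d.level, "dump of", d.volume, "from", d.tape, d.taken)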

Stanford uses Legato's product and the BoxHill asm. Supported; all backups run under this system. Volume locking is not a problem because of these products. BoxHill is not interested in writing another asm for us. Sparcs, 4 Exabyte jukeboxes. Daily incremental backups, etc. Happy with RAID, but note that a power loss forces a serious fsck, which, combined with salvage, takes up to 90 minutes. 5k-8k clients with open IP connections to a server, over 10k when busy. Servers are on 2 dedicated FDDI rings.

Other tools (CMU): BudTool or Alexandria. Give them dumps and tapes, and they handle the tape management. Looking at commercial backup packages: limits on database entries -- loading the database requires a lot of physical memory.

UM Engin is using the latest Transarc code and DLT stackers. Looked at Legato -- pricey. Has a license with HP, looking at OmniBack. It addresses a bunch of clients (Netware, Unix, Mac); needs work to work with AFS, but it looks doable. New release in 30-60 days. Right now it's kind of homegrown: Perl scripts, etc. Software trees only get backed up once a month. Sparc 20s (4) with towers on SCSI controllers. Network backups, not local -- FDDI ring attached. 50 clients per server.

UM has multiple tape drives connected to multiple servers, doing periodic backups. Don't know more. They want backup and recovery to be faster; right now the slowdown is (don't know where). UM (Honeyman) thinks about pitching AFS once in a while. Thinks DFS is not an option at all, so that would leave him with NFS. Worried about clients-per-server for AFS and NFS.

Peter - Larry Huston's thesis on disconnected AFS. Lets you use cache contents when you are offline. Didn't touch the servers, so it works with any of them. A replay daemon sits down at the RPC level.
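A minimal conceptual sketch of that replay idea (not Huston's implementation): operations against the local cache are logged while disconnected, and a replay daemon re-issues them through the client's normal RPC path on reconnection. The operation names, paths, and log format below are assumptions.

    # Sketch only: log cache operations while offline, replay them via the usual
    # client RPCs when the network comes back. Because only client-side calls are
    # replayed, the servers need no changes (the property noted above).

    import json
    import time

    LOG = "replay.log"

    def log_op(op, path, **args):
        """Record an operation performed against the local cache while disconnected."""
        with open(LOG, "a") as f:
            f.write(json.dumps({"ts": time.time(), "op": op, "path": path, "args": args}) + "\n")

    def replay(send_rpc):
        """Replay daemon: re-issue each logged operation, in order, at the RPC level."""
        with open(LOG) as f:
            for line in f:
                entry = json.loads(line)
                send_rpc(entry["op"], entry["path"], **entry["args"])

    # While disconnected, writes hit the cache and land in the log:
    log_op("store", "/afs/umich.edu/user/p/j/pjh/notes.txt", length=1024)
    log_op("rename", "/afs/umich.edu/user/p/j/pjh/draft.txt", newname="final.txt")

    # On reconnection, the daemon pushes the log through whatever function the
    # client normally uses to talk to the file server:
    replay(lambda op, path, **a: print("RPC", op, path, a))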

Peter thinks it's possible to use new versions of NFS. The UM Engin guy thinks that DFS is not materializing soon enough; though it seemed sufficiently reliable, faster, etc., all the stuff underneath it is a serious hassle. DFS doesn't offer much more than AFS?? Stanford and MIT disagree.

MIT thinks AFS is more feature-rich, so it's not just a capacity thing.

AFS implementation

Stanford wants to stay here as long as possible.

DFS

Stanford wants to go there when it is stable.

NFS

Peter loves it.

NSS

Huh??

 

EMAIL AND DIRECTORY SERVICES

POP and/or IMAP

MIT is POP only

Cornell is POP only

Stanford, CMU, and UM do both.

CMU has a server that exports both just fine. Biggest problem is quality of IMAP clients.

Stanford is mostly POP, but felt some pressure to go to IMAP4.

UM pressured to go to POP because of roving students; PINE is close to a standard.

UM Engin finds that PINE is the most popular mailer(?). Hacked up /usr/ucb/mail to be a POP client. Want to use AFS as much as possible, and POP lends itself to that.

Someone has Kerberized clients for IMAP.

Netscape isn't there yet.

Stanford was talking to Portola, who was Kerberizing their IMAP server -- NS bought them, and we're waiting to see what will happen. 5000 users on a server. Since Eudora is so popular, they say IMAP doesn't support everything they need, so they'll come out with their own IMAP server -- they want functions similar to disconnected AFS. (CMU? MIT?)

UM figured when they started that the potential user base was large; they couldn't run just one server for everyone. There is a different logical server for each first letter of your login name. Currently they have 9 physical servers that the 26 logical ones map to.
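A minimal sketch of that mapping, assuming made-up physical host names and a simple round-robin assignment of the 26 letters onto the 9 hosts:

    # Sketch only: 26 logical mail servers, one per first letter of the login name,
    # folded onto 9 physical hosts. Host names and the letter assignment are invented.

    import string

    PHYSICAL = ["po%d.mail.umich.edu" % n for n in range(1, 10)]    # 9 hypothetical hosts

    LOGICAL_TO_PHYSICAL = {
        letter: PHYSICAL[i % len(PHYSICAL)]
        for i, letter in enumerate(string.ascii_lowercase)
    }

    def mail_server_for(login):
        """Return the physical host behind this user's per-letter logical server."""
        return LOGICAL_TO_PHYSICAL[login[0].lower()]

    print(mail_server_for("geadie"))    # every login starting with 'g' lands on one host

Moving all of one letter's users to a new machine then only means changing one entry in the table (and where that logical server's name points).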

CMU: IMAP4 doesn't say anything about where your server is -- IMSP was supposed to solve that but was shot down as too specific by the IETF. The ACAP protocol is the fallback.

CMU thinking of proxy front end for users so they can have whatever they want behind it.

UM - Is anyone having Eudora response-time problems? Slowness? 6000 people on a server, on some sort of Sparc, not Ultras.

UM Engin (Paul): 1 POP server for Engin, 6k-9k mailboxes. Did some optimizations to speed up mail handling. Mailbox format on the server? The evil ucb mbox format at UM.

MIT - POP based on Marshall Rose's MH. Berkeley mail format is much better.

UM CITI uses inc, mh, kpop.
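For the mailbox formats being compared above: ucb/Berkeley mbox keeps every message in one flat spool file, while MH keeps one numbered file per message. A small sketch using Python's standard mailbox module (the paths are hypothetical):

    # Sketch only: read the same user's mail from the two formats discussed above.
    # mbox = one flat file per user; MH = a directory with one file per message.

    import mailbox

    # Berkeley/ucb mbox: the whole spool is a single file.
    spool = mailbox.mbox("/var/spool/mail/pjh")        # hypothetical path
    for msg in spool:
        print("mbox:", msg["Subject"])

    # MH: each message is its own numbered file (what inc and mh manipulate).
    folder = mailbox.MH("/home/pjh/Mail/inbox")        # hypothetical path
    for key in folder.keys():
        print("MH:", folder[key]["Subject"])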

UM says location transparency is important. (Peter)

Apple Developers Conference in May - Steve Jobs said all email clients suck. (Gavin raised this.) Gavin thinks Eudora is too multifunction: transport, filter, process, archive, etc.

A good calendaring protocol would be useful.

Transition to IMAPv4 (or other nextgen mail)

Dealing with NS (and other clients we don't like)

 

Directories and X.500

UM - any pressure for integrating email and directory services??

Small mailing software -- does name completion and looks up valid addresses.

This would be nice on a big system.

Ford has a mail system that does name completion, and they are now looking at LDAP.
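A minimal sketch of that kind of name completion, run here against a tiny in-memory snapshot of directory entries (a real system would query the X.500/LDAP directory); the sample names and addresses are made up:

    # Sketch only: expand a typed fragment to a full name and a valid address.

    DIRECTORY = {
        "Gavin Eadie":    "gavin@example.edu",
        "Peter Honeyman": "honey@example.edu",
        "Scott Brylow":   "brylow@example.edu",
    }

    def complete(fragment):
        """Return (name, address) pairs whose name contains the typed fragment."""
        frag = fragment.lower()
        return [(n, a) for n, a in DIRECTORY.items() if frag in n.lower()]

    matches = complete("gav")
    if len(matches) == 1:
        name, addr = matches[0]
        print("%s <%s>" % (name, addr))
    else:
        # On a big system a fragment will match many entries, so the client has to
        # present choices or rank them rather than expand blindly.
        print("ambiguous:", [n for n, _ in matches])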

UM believes that NS and MS will totally integrate directory services and LDAP stuff with email in the next year.

Stanford said fuzzy matching is bad. UM copyright guy said that it's needed for a community of 100k.

UM is worried about too much access to students' info in directories -- like which classes they are in and when, and what their email address is. UM is going to ACL the email info so that spam goes down.

Stanford supports vacation messages with a web page.

MIT doesn't support them at all.

Mail Delivery

Sendmail

CCMail

Synching data

Stanford has 1 directory (1 LDAP, 1 slapd), replicated --

Data is stored elsewhere

Compliments on the system Craig outlined.