Just a reminder. We will have our regular [[community meeting]] on November 23 at 11 AM Pacific Time.
Please join us at IRC #setiquest on Freenode.
Agenda item: update on conduit - what do Gerry and Rob need to move forward (other than time)?
Also, does anyone have experience with streaming/scalability?
(8:00:37 PM) Avinash: Good morning / afternoon / evening everyone
(8:00:50 PM) afeder: 'morning
(8:00:54 PM) Jill [62cf3ddc@gateway/web/freenode/ip.188.8.131.52] entered the room.
(8:01:00 PM) Avinash: Anyone new, who wants to introduce themselves?
(8:01:01 PM) MichaelM: hi yall
(8:01:05 PM) sigblips: Hello.
(8:01:33 PM) Avinash: Listing agenda items for today?
(8:01:59 PM) jrseti [d12163a2@gateway/web/freenode/ip.184.108.40.206] entered the room.
(8:02:10 PM) afeder: 0) quick request: Avinash can you make a 'tasks' forum? http://setiquest.org/forum/topic/task-or-problems-list#comment-1520
(8:02:11 PM) gliese581: Title: Task or problems list | setiQuest (at setiquest.org)
(8:02:19 PM) MichaelM: +1
(8:02:30 PM) Avinash: Anders - will do.
(8:02:34 PM) afeder: great
(8:02:51 PM) Avinash: A topic under general, or a whole new forum?
(8:02:59 PM) afeder: new forum
(8:03:03 PM) Avinash: OK.
(8:03:29 PM) Avinash: 1) http://setiquest.org/forum/topic/ata-cloud-conduit?
(8:03:30 PM) gliese581: Title: ATA-to-cloud conduit | setiQuest (at setiquest.org)
(8:03:37 PM) Avinash: Item 2) ?
(8:04:21 PM) MichaelM: Is a discussion on anything you got at TED timely and appropriate?
(8:05:05 PM) Avinash: Yes, we could. The big thing there was a discussion with the Zooniverse team. I am still waiting to engage with them fully.
(8:05:16 PM) sigblips: 2) Amazon's AWS donation. What is included? What are the limits / quota?
(8:05:29 PM) Avinash: And, I wanted to report only after we have something concrete.
(8:05:43 PM) Avinash: OK. 2) AWS, 3) ?
(8:06:49 PM) afeder: that's it
(8:06:53 PM) Avinash: OK. AWS is the easy part. Let me address that, and then move on to the others.
(8:07:52 PM) Avinash: They seem to be flexible in what we do. We can use the GPU cluster for small applications (not defined). The process is for us to inform Kurt, our primary contact there.
(8:08:13 PM) Avinash: My guess is that this process applies to anything out of the ordinary.
(8:08:37 PM) Avinash: So far, we have AWS machine instances (small, and medium, I think), upload, download, and storage.
(8:08:53 PM) Avinash: Is there anything else that we need from them?
(8:09:24 PM) Avinash: In terms of limits, there has been some discussion on the exact amounts - 1TB download (?) etc.
(8:09:27 PM) sigblips: Can you use the extra double super duper mega memory high compute instances?
(8:09:52 PM) Avinash: However, they are monitoring our usage, and want an explanation from us on how we are using it.
(8:10:05 PM) Avinash: What do we need the extra double ... for?
(8:10:43 PM) Avinash: While right now it is not included, I am sure that for a good reason, they will allow us.
(8:11:01 PM) afeder: nice
(8:11:13 PM) sigblips: Lots of things. Anything high compute related. A cloud SonATA for example.
(8:11:39 PM) Avinash: Are we running into limits with the medium instances?
(8:12:13 PM) Avinash: Just wondering if anyone has tried it with the setiData we have put there.
(8:12:32 PM) sigblips: I've been playing around with the free AWS micro instances. They work but the compute performance is horrible.
(8:13:20 PM) afeder: how many medium instances can we have running at a time though?
(8:13:41 PM) Avinash: How long does it take to process one data set? I ask, because if we are doing "batch" operation, with one data set at a time, we can be much more relaxed about response time. It is the real-time data where performance becomes important.
(8:13:57 PM) Avinash: afeder: As many as we want - subject to our limits, which are very high.
(8:13:59 PM) Jill: what makes the performance horrible?
(8:14:20 PM) Avinash: But, both we and they want to be sure that we are using their services appropriately.
(8:14:26 PM) afeder: sure
(8:14:35 PM) afeder: i doubt it will be a problem for the foreseeable future
(8:15:16 PM) afeder: next item?
(8:15:23 PM) Avinash: Do we want to try out their GPU cluster?
(8:15:38 PM) Avinash: Someone has to port the software to run on GPUs. I don't know how difficult that is.
(8:15:43 PM) sigblips: The micro instance's CPU is a 2.6 GHz Xeon which is very fast. What makes the performance horrible is that your CPU slice is throttled among many users. I think they've over-sold capacity.
(8:16:17 PM) Jill: interesting to know that
(8:16:31 PM) Avinash: But, unless we do real-time, is this really a problem?
(8:17:19 PM) sigblips: Performance is also highly variable. For example a 16 second CPU job sometimes takes 16 seconds and sometimes takes 300+ seconds. It is random.
(8:17:39 PM) Jill: well a GPU port would be all about investigating the enhanced speed that's possible, so it's necessary to compare apples and apples, not apples and apple slices
(8:18:21 PM) Avinash: Slicing is part of the game with cloud computing.
(8:18:47 PM) Avinash: Once we have software running on the slice, maybe we could take it to nVidia, for performance testing.
(8:18:50 PM) Avinash: Just a thought.
(8:20:56 PM) Avinash: Shall we move on?
(8:21:01 PM) ***afeder nods
(8:21:19 PM) ***MichaelM will nod as soon as his slice comes around
(8:21:24 PM) afeder: :)
(8:21:39 PM) Avinash: OK. I have to take the back-seat in this item. http://setiquest.org/forum/topic/ata-cloud-conduit
(8:21:40 PM) gliese581: Title: ATA-to-cloud conduit | setiQuest (at setiquest.org)
(8:22:01 PM) MichaelM: kill the bot!
(8:22:19 PM) Jill: whose bot is it anyway?
(8:22:24 PM) afeder: welterde's
(8:22:26 PM) Avinash: No violence here!
(8:22:44 PM) Avinash: Don't we need it to record the session?
(8:22:50 PM) afeder: no
(8:22:54 PM) afeder: i copy paste it into the forum
(8:23:03 PM) MichaelM: oh it's okay
(8:23:09 PM) Avinash: Then, what is the purpose of the bot?
(8:23:22 PM) MichaelM: never mind
(8:23:22 PM) afeder: dont know .. but lets move on :)
(8:23:33 PM) Avinash: OK, back to conduit.
(8:23:47 PM) afeder: anyway, the item is really for robackrman and gerryharp: how do we move forward on conduit?
(8:24:48 PM) jrseti: We should discuss with Gerry. I assume the pipe will be very small at first.
(8:25:15 PM) afeder: yes, sigblips did the math: some 300 kb/s if continuous
(8:25:29 PM) Jill: max will be 20 mbps for now, until we find $ to reshape the 1 gbps line
(8:26:00 PM) afeder: but thats fine to begin with
(8:26:24 PM) jrseti: I say get it working, then it may spur some money to make it full capacity
(8:26:34 PM) afeder: agree
(8:26:42 PM) afeder: question is: what do gerryharp and robackrman need from us? anything we can help with?
(8:27:51 PM) jrseti: I'll talk to them about it and see what they think. I brought up the idea to Gerry several weeks ago, but not in detail
(8:28:17 PM) welterde: the plan was to have it log this channel and publish it somewhere ;)
(8:28:32 PM) afeder: jrseti: we discussed it a little at last meeting but it was inconclusive
(8:28:42 PM) welterde: Jill: is it more hardware that is required for that or.. ?
(8:29:20 PM) afeder: my understanding is that it is cash for the ISP
(8:29:49 PM) jrseti: yes, ISP is expensive for this
(8:30:14 PM) Jill: more hours in the day. Gerry is now under pressure to complete tasks for contracts that are paying our bills before Jan 31 as opposed to April 15. i don't think hardware is an issue.
(8:30:38 PM) MichaelM: at this point, though, with almost nobody using it, can't we just serve "on demand"?
(8:31:11 PM) jrseti: 20Mb/s is good enough to get started
(8:31:14 PM) afeder: yes
(8:31:56 PM) Jill: couldn't we serve to AWS and do various renderings there whether or not anyone looks at them as a start?
(8:31:59 PM) jrseti: What would be the best transport mechanism to AWS?
(8:32:12 PM) sigblips: ssh
(8:32:14 PM) afeder: lol
(8:32:28 PM) afeder: (chaos)
(8:32:50 PM) afeder: is gerry not present?
(8:32:51 PM) Avinash: We can start with a small stream. I don't think bw is an issue right now.
(8:33:13 PM) jrseti: agreed
(8:33:14 PM) Avinash: I think the issue is time for software and system setup.
(8:33:25 PM) jrseti: yes
(8:33:42 PM) afeder: but can we at least get some agreement on the interface?
(8:33:52 PM) afeder: the ATA to AWS interface
(8:33:55 PM) Avinash: Yes, we should.
(8:34:11 PM) afeder: because then we can develop the AWS part independently
(8:34:17 PM) jrseti: ideas?
(8:34:39 PM) Avinash: Who will define the API, and what is needed for the definition?
(8:34:45 PM) MichaelM: are we talking the data protocol, the phy, or what here?
(8:34:58 PM) afeder: we've discussed both one stream for data and one for metadata ... or one combined stream .. but never got decision
(8:35:13 PM) afeder: MichaelM: data protocol
(8:35:18 PM) MichaelM: kthanks
(8:35:38 PM) Avinash: If there is already a proposal, we can work with Rob and Gerry here.
(8:35:56 PM) afeder: Avinash: i think gerryharp and robackrman should define it because they/their hardware set the requirements
(8:36:01 PM) welterde: just plain udp containing the raw data(maybe with some authentication?) + meta-data channel(tcp this time)
(8:36:40 PM) afeder: yes something like that welterde
(8:37:00 PM) sigblips: UDP will drop packets without some form of error correction.
(8:37:19 PM) afeder: sigblips: is that an issue here?
(8:37:35 PM) MichaelM: my thought too
(8:37:39 PM) Jill: actually the easiest thing to swallow would be the multicast packets coming from the channelizer switch. subchannels from the DX would actually be better, but that might take more coding to isolate and resend.
(8:37:45 PM) sigblips: Is an error free data stream important?
(8:38:05 PM) MichaelM: error correction = less data
(8:38:16 PM) MichaelM: nyquist or something, right?
(8:38:22 PM) sigblips: Jill: yes that was my original plan.
(8:39:08 PM) Jill: as we commission SonATA we are living with certain dropped packet rate - tolerable, though we will return to try to reduce it once we are observing routinely.
(8:39:22 PM) sigblips: MichaelM: by error correction I mean a robust retransmit system like TCP or like what NFS can do with UDP.
(8:39:44 PM) MichaelM: got it
(8:40:39 PM) Jill: as i understand it, we are pushing multicast pretty hard, but Rob can correct me.
(8:40:48 PM) sigblips: Jill: You shouldn't have dropped multicast UDP packets on local switches in a closed network. When I said dropped packets I meant UDP on the Internet at large.
(8:41:01 PM) afeder: Jill: in what sense? pushing multicast hard?
(8:42:00 PM) Jill: i believe our usage is close to max bandwidths the switches will support.
(8:42:05 PM) afeder: ok
(8:42:47 PM) afeder: does anyone think UDP will be an issue for the internet link?
(8:43:57 PM) afeder: if not, maybe we should just go on welterde's proposal for now and see if gerry and rob object
(8:44:02 PM) welterde: if you are not too close to the maximum bandwidth of the uplink.. probably not
(8:44:23 PM) welterde: (and if the link is not broken)
(8:44:37 PM) sigblips: Has anyone else here written any code that uses UDP?
(8:44:42 PM) jrseti: I did some UDP over internet work years ago and the issue was not packet loss, but occasional packets arriving out of order.
(8:45:10 PM) jrseti: I've written a lot of UDP code
(8:45:40 PM) afeder: jrseti: how much is occasional, approx?
(8:45:53 PM) welterde: same here
(8:46:25 PM) sigblips: Link quality is important for UDP.
(8:46:28 PM) jrseti: less than 1%, it was small but still a problem.
(8:46:55 PM) afeder: jrseti: was this SETI related?
(8:46:56 PM) jrseti: If we have good link quality, maybe it will not be a problem
(8:47:10 PM) jrseti: It was not SETI related.
(8:47:38 PM) afeder: since we dont intend to decode the data it may not be as big a problem
(8:48:24 PM) jrseti: maybe not. I'd say we should try UDP with some diagnostic capability so we can tell if there are any of these types of problems.
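[Editor's note: a minimal sketch of the diagnostic capability jrseti suggests, in Python. The header layout, stream id, and the one-pass loss estimate are assumptions for illustration, not the actual conduit format: the sender prefixes each datagram with a sequence number, and the receiver can then count lost and out-of-order packets.]

```python
import struct

# Hypothetical datagram header: 32-bit stream id + 64-bit sequence number.
HEADER = struct.Struct("!IQ")

def frame(stream_id, seq, payload):
    """Prefix a UDP payload with a sequence header so the receiver
    can detect loss and reordering on the Internet link."""
    return HEADER.pack(stream_id, seq) + payload

def analyze(seqs):
    """Given sequence numbers in arrival order, return an estimate of
    (lost, out_of_order) datagrams. A datagram counts as out of order
    when its sequence number is below the highest one seen so far."""
    out_of_order = 0
    highest = -1
    seen = 0
    for s in seqs:
        seen += 1
        if s < highest:
            out_of_order += 1
        else:
            highest = s
    lost = (highest + 1) - seen  # gap between expected and received count
    return lost, out_of_order
```

With stats like these logged at the AWS end, it would be easy to tell whether the <1% reordering jrseti saw actually shows up on this link.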
(8:48:30 PM) afeder: alright
(8:48:31 PM) Avinash: jrseti: Is there a way to throttle the multicast packets? We want a small stream in the beginning.
(8:49:10 PM) jrseti: Yes, we could limit it to a small set of channels.
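[Editor's note: jrseti's channel-limiting idea could look roughly like this in Python. The base multicast address, port, and the one-group-per-channel mapping are hypothetical, not the ATA channelizer's real addressing; the point is only that joining a subset of multicast groups throttles the stream.]

```python
import socket
import struct

BASE_GROUP = "239.1.1.0"   # hypothetical multicast base address
DATA_PORT = 50000          # hypothetical data port

def group_for_channel(chan):
    """Map a channelizer output channel to a multicast group address
    (assumed scheme: base address + channel index)."""
    base = struct.unpack("!I", socket.inet_aton(BASE_GROUP))[0]
    return socket.inet_ntoa(struct.pack("!I", base + chan))

def subscribe(channels):
    """Open a UDP socket and join only the requested channels,
    so the forwarded stream starts small."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DATA_PORT))
    for chan in channels:
        mreq = struct.pack("4s4s",
                           socket.inet_aton(group_for_channel(chan)),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

Growing the stream later would then just mean joining more groups, with no change to the datagram format.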
(8:49:15 PM) welterde: (heh.. maybe I can use some resources of the university here, like storage or cpu time or so ;)
(8:50:21 PM) afeder: sigblips: did you have another proposal regarding how the apps in the cloud obtain the data?
(8:51:33 PM) sigblips: Yes, but this has transformed from something simple to something designed by committee.
(8:52:08 PM) afeder: but what was your pre-committee proposal in that respect?
(8:52:08 PM) Avinash: Can we then have a single proposal that people can comment on and modify?
(8:54:05 PM) afeder: i dont see any way to do it without the 'catcher' that Gerry talked about, so if there are no competing suggestions, let's go with that?
(8:54:16 PM) sigblips: Whatever.
(8:54:54 PM) afeder: whats the problem though?
(8:56:38 PM) afeder: i'll assume that we'll go with the catcher idea then
(8:57:18 PM) jrseti: A single proposal as Avinash stated would help us go forward
(8:57:47 PM) jrseti: Who will write that proposal?
(8:58:19 PM) afeder: we've been discussing it between me, sigblips, gerryharp, robackrman, Mike Davis and Avinash
(8:58:49 PM) afeder: i'll be working on a proposal if sigblips is fine with it
(8:59:24 PM) jrseti: that would be great
(9:00:24 PM) afeder: okay - let's call it day?
(9:00:32 PM) Jill: on that positive note, let's quit.
(9:00:37 PM) jrseti: yes!
(9:00:52 PM) afeder: alright, see you next time
(9:00:54 PM) Jill left the room (quit: Quit: Page closed).
(9:00:54 PM) Avinash: Sounds good.
(9:00:55 PM) jrseti: bye
(9:01:05 PM) jrseti left the room (quit: Quit: Page closed).