spstarr: megari: 3am walking home bus stop .... skunk: walking down sidewalk unbeknownst to either of us
spstarr: I did see it before it was .. too late
yangman: what's really unfortunate is when someone runs a skunk over about a block from your house :(
spstarr: bridgman: we also had a skunk nest in my old house
yangman: everybody loses
spstarr: yangman: it's like killing kittens
yangman: that was an unpleasant 2 weeks
spstarr: the skunk can't really do much else other than spraying you
yangman: I'm sure it was unintentional
spstarr: what I don't know is if the skunk that was approaching had its scent bag cut off or not
yangman: but, being a busy road artery, the little guy got flattened pretty, well, flat
yangman: a bunch of days of skunk smell followed by a bunch of days of decomposing corpse smell
spstarr: threatened skunks will go through an elaborate routine of hisses, foot stamping, and tail-high threat postures before resorting to the spray.
yangman: yeah, I've seen many, but never had one feel threatened by me
yangman: they just kinda look this way, look for a bit, then waddle off
bridgman: spstarr; your porcupine, sir -- http://pages.interlog.com/~johnb/ht/DSCN2940.jpg
spstarr: looks at your url
spstarr: people have these as pets...
spstarr: bridgman: the carrot trick worked?
bridgman: that was taken last year by the front door; I stacked up some boxes in the morning on the way to work and accidentally blocked this guy in
spstarr: would not wanna piss off that lil fella
bridgman: he sat there quietly all day and was still there in the evening
bridgman: I moved one box, closed the door, turned out the lights and he waddled off
yangman: you just gotta avoid falling on 'em ;)
bridgman: looks like he didn't eat the carrot; no idea what they eat actually, but most things his size eat carrots
bridgman: or cats
agd5f: benh: does machine_is_compatible() work for macs with "detected as" /proc/cpuinfo? stuff like the iBook, iSight, eMac?
bridgman: apparently they climb trees, never would have believed that
spstarr: wonders what the fascination with skunks as pets is
benh: agd5f: it takes the exact same strings as the "machine" line in CPU info
spstarr: apparently they're smart too
spstarr: oh they're like ferrets.. my friend had 2 ferrets
spstarr: and those things were testy sometimes, and the baby ones always tried to bite me, my clothes etc
spstarr: or try to go up my leg!
agd5f: benh: ok. based on the xorg code, some of these systems have "detected as" rather than "machine"
MostAwesomeDude: Ferrets are awesome in every way, except that they smell really bad.
soreau: So not in every way ..
MostAwesomeDude: Well, the exception proves the rule.
MostAwesomeDude: I prefer cats, but ferrets are still pretty awesome.
bridgman: &^%#$^%&@!! - so I hear a noise in front of the house; Mr. Porcupine is up on his hind legs eating the wood trim off the corner of my house
bridgman: there's a big chunk missing
agd5f: bridgman: maybe it's a beaver :)
bridgman: I'll bring you the carcass, you can decide ;)
bridgman: apparently they eat tree bark in the winter
bridgman: dude, it's summer, 90F today !!
bridgman: eat the damn carrot !
soreau: bridgman: Raccoons? Porcupines? I've found one more reason not to visit .ca!
agd5f: bridgman: put hot sauce on the wood trim
bridgman: it's OK, we have guns too
soreau: Actually, I've already been and it is a nice place
soreau: I'd consider it a fine place for residing
bridgman: it is nice
bridgman: except for winter, of course
MostAwesomeDude: or.us is similar in raccoon counts
MostAwesomeDude: And winter.
soreau: agd5f: LoL
MostAwesomeDude: Although bridgman's winter is probably a tad harsher.
bridgman: yeah, we seem to have more extremes both ways
bridgman: sigh... I miss the ocean
bridgman: maybe the black flies will eat the porcupine
spstarr: bridgman: lol!
bridgman: or the porcupine will eat the black flies, guess I win either way
spstarr: bridgman: any raccoons? My mom told me one of them was sitting on our deck... smart fellas.
bridgman: agd5f; there's a *lot* of wood trim; I'd need a 55gal drum of hot sauce or something
spstarr: bridgman: the animals want in
MostAwesomeDude: Dude, just imagine how big the taco would be for that drum of hot sauce.
bridgman: spstarr; last week I went out at night, a raccoon was in the garbage can; I turned on the lights and he came out of the tipped-over can
MostAwesomeDude: Now I really want tacos.
bridgman: ran back in, grabbed a chicken leg and scurried up one of the posts holding up the carport
agd5f: I had tacos for dinner
spstarr: bridgman: they are brash but I've never seen one up close
bridgman: went out on a beam, sat there looking at me, munching on the chicken leg
MostAwesomeDude: Dang it, now I *really really* want tacos.
bridgman: pissed me off something fierce
spstarr: bridgman: they did a study on raccoons and they are pretty smart too
MostAwesomeDude: bridgman: They're crafty little bastards, ain't they? :3
bridgman: yep, he even got the bungee cord off the garbage can
bridgman: I can barely do that
spstarr: especially the city raccoons, they learn well
bridgman: I'm sure somewhere in Toronto a raccoon is ordering pizza with a stolen credit card number
spstarr: that's a cartoon =)
bridgman: the animals want in all winter, but I'm pretty well set up to repel boarders
bridgman: in the summer they should want *out*
spstarr: I did have an interesting encounter with a tree rat (squirrel)
spstarr: and with crows, I can have a staredown with.. they don't fear a skinny thing like me :(
spstarr: i did once
bridgman: speaking of repelling boarders, I assume everyone has watched this video ?
spstarr: bridgman: they want in in summer cause it's nice and cool :)
spstarr: it tells you when the trap is 'full'
spstarr: lots of mice...dead
bridgman: there's another one that will email you when the trap needs emptying
bridgman: one controller, up to 24 wireless electronic traps
spstarr: maybe it has SNMP support?
spstarr: snmpget oid.trapName.mouseDeadCount
spstarr: im still waiting for a fridge to have SNMP capabilities
spstarr: to send me snmp traps if door is left open etc
agd5f: spstarr: build one your self
bridgman: I think the LG NetFridge can do that
spstarr: agd5f: a door sensor would be possible
bridgman: built in PC, web browser, somehow keeps track of what you take out (UPC scan ?) and builds shopping lists for you
spstarr: that's nifty
agd5f: rfid would be easy
spstarr: then press the 'Order Now' button to have it rushed delivered to your place from grocerygateway.com :)
bridgman: agd5f; you'd keep getting the rfid tags stuck in your teeth
bridgman: hmm, maybe they dropped it; I can only find a fridge with built-in HDTV
agd5f: price of convenience
spstarr: bridgman: so what is AMD's position on switchable graphic capability for the r6xx+ ?
spstarr: bridgman: if we exclude X for simplicity is it doable with drm driver?
MostAwesomeDude: spstarr: I would imagine it's something along the lines of "It's possible, but really really tough."
spstarr: MostAwesomeDude: with drm+kms it might make that easier.. again not including X
agd5f: I think it also tends to be an oem value add, so standardization is lacking
spstarr: but didn't AMD and Intel (at least for Lenovo laptops) agree to something
spstarr: I saw that on amd site
spstarr: I'm on the GMA card right now; the r6xx worked with EXA. I wasn't even sure this came with switchable as it wasn't documented
spstarr: agd5f: I'm assuming the vista driver does some sort of GPU wake POST/shutdown FINI?
spstarr: and there is some logic somewhere to actually switch the video line output from one GPU to the other
spstarr: but this wouldn't be documented in the r6xx programming guide; at least if I look at it I don't know
spstarr: "AMD has written an intermediate driver which is always present, and through which the "real" AMD and Intel drivers communicate with the operating system"
spstarr: so we'd need another kernel driver module
spstarr: it might take reverse engineering Lenovo's power manager
spstarr: to see what actually it does
bridgman: the main issue with switchable graphics is that the work involved collaboration between multiple companies
bridgman: we each own our portions AFAIK, but you kinda need all the info to write drivers
spstarr: one person from Lenovo on their blog doesn't want this 'IP' to be free
spstarr: of course a lot of negativity came out of that in the comments
spstarr: do you think it will become more commonplace?
spstarr: switchable graphics
yangman: I certainly hope not
bridgman: there's not a lot of complicated programming info, it'll just be kind of a pain to get everyone nodding their head in the same direction at the same time
bridgman: and I already have too much on my "things to do" list ;)
bridgman: can someone point me to the Lenovo blog ?
spstarr: bridgman: so basically reverse engineering from vista would be the only way right now, a "Chinese Wall" approach
spstarr: i believe you can sniff calls
spstarr: watch registers, dump from one GPU, capture that, examine it.. figure out what they are.. then do the same for the other GPU, dump registers.... then figure out the glue
spstarr: I'm guessing that before it switches, this 'intermediate' driver from AMD controls which GPU to turn on/off, and the Lenovo PM tool just calls this driver w/o going into AMD's or Intel's register programming
spstarr: since Lenovo would not want to deal with sending packets to the GPU to initiate it to go into a deep sleep or power off
bridgman: I suspect we'll be able to get agreement to release the programming info before the infrastructure in X has been figured out
bridgman: this probably all gets easy once we have KMS and Gallium3D
spstarr: that would be nice, when the time comes, I want to learn how from drm and gallium 3d drivers
spstarr: if there are comments it will make understanding what's going on much easier
bridgman: 'cause then it's a lot easier to write an X driver which only has to worry about the two-GPU stuff and not about all that yukky hardware programming
spstarr: im guessing modifying X to do this isn't going to be easy, but wouldn't KMS + the slimmer DDX make this more doable?
bridgman: exactly; if possible, you want to hide this from X completely
agd5f: looks to me from that thinkpad support link you posted that when the discrete chip is active both devices are active, so it's probably just a matter of turning on/off the discrete card
spstarr: agd5f: if you enable in bios
agd5f: maybe some gpio magic for the display lines
spstarr: right now i have the intel one enabled only, can't have both or X goes wild
spstarr: doesn't know whos 'active'
agd5f: spstarr: does it work if you specify a busid and driver in your xorg.conf?
spstarr: agd5f: the GPU would never be totally off but like a D0 state? (nearly off)
spstarr: agd5f: I can try
spstarr: agd5f: that might explicitly tell X which one to use
spstarr: agd5f: will try later today (2am right now)
bridgman: it's 2am in agd5f-land too ;)
agd5f: might be D3 cold, or maybe the bios twiddles some pci bits to disable the port
spstarr: D0 is fully powered on right?
spstarr: mixes up ACPI sleep states ;p
agd5f: as I recall
bridgman: yeah, small numbers = more alert
bridgman: d3 cold is like dead
bridgman: of course none of the s and d numbers entirely line up
bridgman: d3 is more of a coma
bridgman: it was not a happy day many years ago when we discovered there was more than one d3
bridgman: many years ago
agd5f: d3 hot and cold IIRC
bridgman: think of the "Crucifixion ??" line from Life of Brian
spstarr: agd5f: the intermediate driver is making me curious
spstarr: we might not need one in Linux if we just expose if the GPU is switchable in sysfs somewhere
bridgman: spstarr; something has to understand the concept of "two GPUs, one display"
spstarr: then PolicyKit or some other authentication could grant a user to tell kernel to switch
bridgman: which is weird
spstarr: bridgman: but they're both not 'on'
bridgman: the question is whether X needs to be forcibly educated about it, or whether it should be quietly hidden in a driver
spstarr: er on at the same time
bridgman: right, but X has this crazy idea that if you have a GPU then you probably have a display
bridgman: and I don't think X understands the concept of hot-plugging GPUs, does it ?
agd5f: I think you could do it with a shadowfb hack
bridgman: maybe something for GPU objects
spstarr: so if it sees 2 GPUs it thinks 2 displays?
bridgman: I think so
bridgman: I guess hot-plugged GPUs would be a nice generic solution, and would also handle external GPU boxes via PCIE cable
agd5f: actually xinerama might do the trick since it muxes all drawing to each framebuffer
bridgman: but the transition from one GPU to t'other would still be tricky, because there's an upper layer which wants to be blissfully ignorant of the GPU switch
spstarr: seems logical; we have hot-plugging capability in linux, though I dunno its API, for CPUs and PCI hot slots
bridgman: agd5f; that's interesting
agd5f: set up both cards with xinerama, turn off the displays on the one you don't want active
bridgman: you need something that fools upper layers into thinking the display is still there, even though the middle part of the app/server/GPU/display chain is being replaced
spstarr: wouldn't XRandR inherit this functionality somewhere?
agd5f: have some fake randr output that's always connected
bridgman: we're getting off topic here, isn't this #porcupine ?
bridgman: damn, wrong channel
bridgman: carry on
spstarr: agd5f: XRandR sees that I have LVDS1
bridgman: that's the key; fake
spstarr: oddly it shows '1'
spstarr: I can see if flipping to AMD it shows LVDS0
agd5f: spstarr: intel might start their output numbering at 1
spstarr: oh this is intel's view of what it sees?
agd5f: it's driver specific
bridgman: the great thing about standards is that there are so many to choose from
agd5f: spstarr: you could call the output "bob" if you wanted
Zajec: bridgman: could you think about getting PM docs from AMD? like how to disable unused chipsets and how to measure current load of GPU?
spstarr: so I see such a project like this requiring multiple things - a big project - for the future
spstarr: agd5f: if this remains just a niche, there's no point spending time on it
bridgman: Zajec; the power stuff is third party chips; AFAIK we own the info related to how the chips are hooked up, and I think the programming info for the power chips themselves is already public
spstarr: but AMD and Intel both worked on this so it might go somewhere more
bridgman: spstarr; think of it as a big niche
bridgman: it probably won't be forever but it fills a useful need today
bridgman: and probably will for a while
Zajec: bridgman: oh, good to know, thanks
spstarr: well when GPUs are in CPUs....
spstarr: you dont need 2 GPUs really
spstarr: and the GPUCPU would have Power management capabilities
spstarr: unless the GPUCPU is 'slower' than a discrete GPU
spstarr: or we might end up with GPUCPU and a discrete GPU mode with switchable graphics ;p
spstarr: fun times
bridgman: I can't really comment much here, sorry
spstarr: i know :)
spstarr: bridgman: all will be revealed when whatever is developed, tested, given to vendors for further testing, then announced at a trade show :)
bridgman: pretty much; or when it shows up in the Inquirer or Fudzilla ;(
bridgman: am I the only one who finds it disturbing that there were 3D video games 15 years ago ?
spstarr: in OpenGL?
spstarr: bridgman: software rendered? :)
agd5f: quake was '95 or '96 as well
spstarr: in 1995 i was cursing my Trident SVGA video card
spstarr: I couldn't muster much out of it
hax0r1337: I was playing sega genesis :)
spstarr: oh wait!
spstarr: it was not; that was with my Pentium 200 (then upgraded to a 233 MMX) which had the ATI 3D XPression+ (with daughterboard) card
spstarr: the Trident was with my 386 DX33
spstarr: I still have the XPression+ in my bag-o-video-cards
bridgman: yeah, I seem to remember early-mid 90s we splurged and bought a 486 for doing Xilinx autorouting
bridgman: seems like there should have been steam engines or something
spstarr: this was a 3D RAGE II (Mach64 GT)
bridgman: yeah, we had an ATI card in the 486 I think
bridgman: I'd never really heard of ATI then ;)
spstarr: I didn't know ATI made modems too
spstarr: I found one of them at my previous job
bridgman: yeah, that was a bad time to get into modems
spstarr: it surprised me
spstarr: bridgman: cold front is through, byebye 30C
spstarr: nice a comfortable today
bridgman: I'll probably still put the soft-top on the Jeep though...
bridgman: anyways, the little hand is getting awfully close to "3", time for zzz
spstarr: :-) nite bridgman
spstarr: time for me to go D3 state soon
rah: System Events
rah: May 21 21:51:51 myrtle kernel: [drm:radeon_cp_indirect] *ERROR* sending pending buffer 13
rah: May 21 21:52:07 myrtle kernel: [drm:radeon_cp_indirect] *ERROR* sending pending buffer 8
rah: (with drm r6xx-r7xx-support)
glisse: dileX: lastest kernel should fix rendering glitch you have
dileX: glisse: thx, I will check. report later.
nanonyme: MostAwesomeDude: Well, can't promise to be useful but I'm interested in the matter in any case.
dileX: glisse: screen-corruption is gone (radeon: configure subpixel for 1/12 precision on r5xx hw)
dileX: glisse: irq-problematic remain
dileX: glisse: here kernel based on drm-next-radeon (commit 9bf2b46) and linus-tree on top (w/ fix-drm_helper_initial_config-to-satisfy-linux-2.6.30-rc3.patch)
hifi: what screen corruption exactly?
hifi: flash/xrandr corruption?
hifi: looks familiar
hifi: though I run the stable stuff
dileX: problem was the ddx was using 1/12 subpixel precision and the kernel was not
dileX: hmm, zhasha is not around
glisse: dileX: what is the irq problem already ?
dileX: glisse: still the same as reported. while playing flash-music video in firefox. radeon kernel-module and sound share the same irq.
dileX: glisse: audio-dropout problem
dileX: problem seems not to be ffx/flash related; playing a locally stored flash-movie w/ vlc has audio-dropouts, too
glisse: dileX: your sound card is intel hda ?
glisse: i will take a look at audio driver even if we emit lot of irq it should behave properly
dileX: glisse: output of alsa-info.sh
dileX: glisse: where is the irq-related stuff handled for your work (source-code)? kernel? ddx?
nanonyme: MostAwesomeDude: Meaning if I have spare time, I'll study the matter a bit. :)
nanonyme: MostAwesomeDude: Mostly the tricky part is, as far as I've read, that the video decoder is essentially separate from the GPU, so the documentation given might not cover it.
glisse: dileX: irq is in kernel always
dileX: glisse: for kms this is in radeon_irq_kms.c: DRM_INFO("radeon: irq initialized.\n");?
glisse: dileX: but there is no bug there
glisse: irq code is fine
glisse: the only diff is that now we emit thousands of them per second
glisse: while with dri1 there are only a few of them
glisse: no matter what, the sound driver should behave properly
max_r: has anyone had any success with running redbook hello on r6xx-rewrite?
glisse: i will see if i have such audio card and try
dileX: glisse: hehe. I am trying to understand the correlation (not blaming). the problem could be caused by hda_intel.c
glisse: dileX: you should look at hda_intel.c and what it does on irq
glisse: you won't see anythings interesting in radeon irq code
glisse: though I will review it once again later
dileX: glisse: do you remember that I told you that the radeon kernel-module give a
dileX: dri2: 16: 286192 0 IO-APIC-fasteoi HDA Intel,
glisse: problem is likely in radeon_drv.c
dileX: (beyond audio-troubles)
glisse: but this isn't big deal
glisse: my todo list has way too many things for me to care about such things
nanonyme: glisse: Got your todo list somewhere on the Net available so all of us end users could share guilt for you having too much stuff to do? =^_^=
glisse: nanonyme: it's in my head and I have problems redirecting my brain to a file on a linux system ;d
nanonyme: Keep writing all the new tasks down so you'll eventually solve the old ones and you'll have all todo items written down?
agd5f: MrCooper: BTW, I added preliminary support for mac cards to glisse' kms drm tree
MrCooper: agd5f: wow cool, sounds like it's time for me to try it :) I'm afraid I won't get to it before the week after next one though
agd5f: MrCooper: no worries, just thought I'd mention it
max_r: agd5f: what was the purpose of your last two commits on r6xx-rewrite? Does it mean that it somehow works for you?
MrCooper: I appreciate it
agd5f: max_r: first one fixed a bug in the command submission for r6xx
agd5f: the second fixed a segfault when setting up vertex shaders with only pos
max_r: agd5f: and you found it by looking at source code or actually running it?
agd5f: max_r: both
max_r: agd5f: and does it work for you?
agd5f: hello works in that it doesn't crash, but it doesn't always render right
max_r: for me it fails with glut error
max_r: what do I need to test it?
stikonas: agd5f: do you know about poor Textured Video performance (with newttm KMS) when video is scaled?
agd5f: stikonas: no
stikonas: and I think it only occurs when video is scaled up
hifi: have anyone tried running team fortress 2 with r500 and wine? :p
stikonas: because HD movies play more or less OK
max_r: agd5f: are you using radeon or radeonhd? what x.org server version? or it doesn't matter at all?
agd5f: max_r: my r6xx-r7xx-3d drm branch and the r6xx-rewrite branch of mesa
agd5f: max_r: either radeon or radeonhd works
agd5f: xserver shouldn't matter
max_r: agd5f: what card? fails here on hd3650
agd5f: I've been testing on rv630 and rv730
max_r: freeglut (./hello): ERROR: Internal error
max_r: hd3650 is like rv635?
max_r: so it should work?
max_r: I am using drm from your branch and r6xx-rewrite from mesa...
agd5f: max_r: looks like a problem with hello
max_r: glxinfo crashes
max_r: after listing OpenGL extensions
agd5f: yeah, I know
max_r: it does same thing for you?
agd5f: it's segfaulting when attempting to destroy the ctx private
max_r: yeah: http://pastebin.ca/1430982
max_r: and what can I do about glut problem?
agd5f: you can work around it by checking if the pointer is null, but I'm not sure why it's null in the first place
agd5f: max_r: if you are using radeon make sure you have http://cgit.freedesktop.org/xorg/driver/xf86-video-ati/commit/?id=2888dd9ae9689b1cd72115dc0ceea1f5957299b0
nanonyme: wonders if valgrind would catch it
max_r: what pointer? I 'fixed' problem with glxinfo by commenting out r600DestroyContext call in radeonDestroyContext, it makes glxinfo go to the end but redbook hello fails with same error
max_r: agd5f: yes, I am using latest xf86-video-ati git
agd5f: max_r: the context private pointer that r600DestroyContext dereferences
agd5f: max_r: I'm not sure about the glut error. I'm not that familiar with glut
max_r: agd5f: it starts to fail in another function
max_r: the workaround is to add "return;" to the beginning of radeonDestroyContext
agd5f: max_r: also, at this point the only things worth trying are the simple redbook demos without textures
max_r: agd5f: and I am trying exactly simple redbook demo
max_r: agd5f: it fails with glut error
agd5f: sounds like the visuals or fbconfigs aren't set up right or something. I dunno
max_r: I am compiling mesa/progs/redbook/hello.c - is it the correct place for "redbook hello"?
glisse: agd5f: do you remember how cliprects in the dri1 world are supposed to work? ie should driupdatedrawable update the cliprect using information from the xserver
glisse: or should i get that from the sarea
agd5f: glisse: I think the sarea, but I don't remember for sure
agd5f: glisse: looks like we don't have a wait for engine idle hook yet in the drm
glisse: agd5f: we don't need it
glisse: at least not in the new world :)
glisse: with the memory manager, if you want to wait for the gpu to be done you can wait on bo idle
agd5f: what about for stuff like changing clocks or modes?
glisse: oh, for that we first need a big read/write lock in any path which might hit the gpu; i think putting it in all ioctls should be enough, and then we effectively need something to wait for idle, but i think the best is to schedule the clock change through the ring
glisse: i did that and it worked
glisse: it's also a lot easier as it doesn't need any big lock
glisse: on r5xx
glisse: never tested that on r3xx or before
agd5f: I don't think you should change the engine or memory clock via the ring
agd5f: you need delays and readbacks, etc.
glisse: i didn't try changing the memory clock... didn't think of that one
glisse: gpu clock was fine
agd5f: I still don't think it's a good idea
glisse: yeah, for the memory clock using the ring would be problematic if the number of regs that need to be written is > 63
mjg59: Memory clock is awkward
mjg59: Doing it via atom takes about 45msec
mjg59: Experimentally, it can be done in much less
mjg59: glisse: I've got a hack for this stuff at http://www.codon.org.uk/~mjg59/tmp/drm-radeon-pm.patch
mjg59: But it does do locking
mjg59: If you want to do memory reclock you either need to do it in vblank and stall all other instructions, or you need to disable the outputs
agd5f: plus doing it via the ring means we have to decode all the bits in the atom functions and for all the various asic rather than just calling the atom scripts
agd5f: in some cases you have to read back a register to make sure the pll locked, etc.
glisse: mjg59: did you know what was taking so long with atom ?
mjg59: glisse: Yes, it reinitialises the entire memory controller and then sleeps for a while
mjg59: agd5f: The power savings from memory reclocking are significant, so there's strong incentive to be able to do it on the fly
glisse: agd5f: can you find out if reprogramming the whole mc is really needed ?
glisse: i suspect the windows driver does reclocking without that
glisse: same for osx
glisse: but osx only knows a small subset of asics
mjg59: glisse: What I ended up with was doing most of the setup in the drm and then just calling atom for the final step
mjg59: Which was fast enough that I could do it in vblank
agd5f: depends how far you change the clocks
mjg59: In testing on my X1900, it seemed entirely stable to flick between full speed and 100MHz with that code
max_r: agd5f: is there something like LIBGL_DEBUG for glut? or any other way to find out what's wrong with the visuals?
mjg59: Though 100MHz causes problems on some systems
glisse: mjg59: what is the final step in atom ? :)
agd5f: max_r: I dunno
mjg59: glisse: Did you look at that patch? :)
glisse: no i am in console without browser
mjg59: The mark of a true X hacker
glisse: working on x you often endup doing that :)
glisse: I'm going to wget it
mjg59: I setup the memory controller by hand
mjg59: And then call the atom code to initialise it
mjg59: r600 /seemed/ like it might be fast enough to do entirely via atom
mjg59: But I didn't have any working IRQ code to test that
mjg59: glisse: The basic idea was to clock up on any submitted instruction and push out a timer, then clock down whenever the timer expired
glisse: and atom is taking a big amount of time doing only these few reg writes...
mjg59: glisse: It sleeps
mjg59: atom tears down the MC, sleeps and then does the bringup
mjg59: It doesn't /seem/ necessary to do the teardown
glisse: yeah i don't think teardown is necessary
glisse: maybe there is a finer grained atomfunction just to change the mc without tearing it down
mjg59: glisse: Tracing the atom calls, it doesn't seem so
glisse: btw the timeout reclocking is definitely what we should do if we want to be aggressive on power consumption :)
mjg59: The initialisation is a separate atom table, but the teardown/programming are in the same table
mjg59: Which then calls the initialisation code
mjg59: The problem then is that it's somewhat asic dependent
mjg59: But r530-
glisse: mjg59: on rv515 mc reg are somewhere else iirc
mjg59: glisse: Yeah, but I'd have expected the atom code to work anyway
glisse: oh i thought the lockup was when doing the small hardcoded reg bitbang
agd5f: the chips also have some hw to adjust clocks automatically (controlled by the asic)
mjg59: agd5f: I don't think that touches the memory clocks?
glisse: but i think clock gating only operate in small range
mjg59: Adding memory reclocking saves me ~5W
agd5f: generally. changing the memory clock is much trickier
mjg59: glisse: Anyway, if you'd like to pick that up that would be great
mjg59: I've only got a limited amount of hardware (one R500, one R600)
glisse: mjg59: i will definitely look at pm once i get the userspace bugs sorted out :)
mjg59: glisse: Awesome
glisse: next on my todo list is also to get userspace fast again when kms is enabled
glisse: then powermanagement :)
glisse: well i might do powermanagement sooner if i get a laptop ;)
nanonyme: glisse: Best motivator for opensource developers who aren't getting paid "well, *I* want this feature". :)
Maestro123: 2.6.30, video 4830: glxgears 350-400 fps; with xrender effects, kwin 150-200 fps. Is it normal?
stikonas: Maestro123: you are using software rendering, so fps only show CPU and memory speed, and such results can be expected, but glxgears is not a benchmark.
kdekorte: Maestro123, yeah that is software rendering, there is no acceleration for r6xx and higher chips at the moment
kdekorte: to be clear... 3d acceleration, 2d acceleration is working as well as XV
Maestro123: If using XV, CPU load is > 30-40%
Maestro123: I read news about the experimental 3D acceleration and decided to try the open driver! Probably still too early =)
stikonas: Maestro123: you need another branch of mesa and drm
stikonas: in order to have 3D, and it is still a bit too early, wait a month or two
Maestro123: Is the DRM in kernel 2.6.30-r6 not enough? Or do I still need to install the DRM from git?
max_r: Maestro123: you need drm from another branch: anongit.freedesktop.org/~agd5f/drm branch r6xx-r7xx-3d
Zajec: what does 'coherent' mean?
Maestro123: Will instructions for this come?
osiris__: agd5f: does r300,r400 or r500 support 64bit float vertex attributes?
King_InuYasha: stupid PuTTY
glisse: osiris__: i dont think so
glisse: i think only r7xx does but maybe r6xx
agd5f: osiris__: nope
agd5f: rv670 and r7xx IIRC
erjc: what about rv350 and so?
agd5f: erjc: nope. same as the rest of the r3xx-r5xx chips
MostAwesomeDude: Why would you need double vert attribs?
MostAwesomeDude: *double float, even.
erjc: but I should pull from same branches, yes?
agd5f: erjc: if you want to test the new 3D bits, pull from radeon-rewrite branch of mesa
glisse: osiris__: do you remember if rewrite ever worked properly regarding cliprect ie when an X app is above a gl app ?
erjc: thx agd5f
osiris__: glisse: I think it has never worked
glisse: damn i hate dri1
osiris__: MostAwesomeDude: EXT_vertex_array (which is part of gl1.1) allows for double precision vertex attributes
MostAwesomeDude: osiris__: Part of, or based on?
nha: but does anybody really use them?
glisse: does the extension enforce implementation to keep precision ?
MostAwesomeDude: I mean, if it's mandatory, we should just let Mesa cast them.
nha: EXT_v_a is just glVertexPointer and friends, right?
osiris__: nha: yes
nha: Mesa should already be doing that for us, no?
osiris__: nha: currently mesa maps all formats to float
MostAwesomeDude: Well, we can add a PIPE_CAP for Gallium, but for Mesa let's leave it as-is.
nha: yeah, just leave the Mesa-casts-stuff codepath for double
nha: it'll be slow, but anybody who uses double floats in a vertex array deserves to lose anyway
osiris__: nha: bytes, shorts and ints are cast to float too
MostAwesomeDude: I, for one, don't feel like trying to do those optimizations in classic Mesa.
agd5f: IIRC that's why ut2004 is so slow
osiris__: hmm, maybe I will implement it today
nha: osiris__: anything that the hardware can do, it would be nice to try to support as well
agd5f: I think it used shorts or bytes
nha: if only to gain the experience in how to do it
osiris__: yeah, unfortunately vertex attrs need to be dword aligned, so if a user tries to feed the card 3-byte vertex colors, we would need to realign every attrib to a dword
nha: osiris__: true, but applications developers are already told not to do that by everybody
glisse: osiris__: don't try to cope with all the stupid errors a programmer can make
nha: once we are more serious about performance, it would be interesting to add a "performance debugging mode" to the driver, where we print out warnings to the console when programmers do something stupid that forces us to use some slow conversions
glisse: just try to optimize what everybody use and what everybody should use
osiris__: glisse: yeah, but we need to be able to handle all formats, right?
glisse: osiris__: mesa will do the conv for you
nha: we need to be able to handle all of them, but not necessarily speedily
osiris__: glisse: no, it won't. I need to bypass mesa conversion to use vertex attributes in bytes/shorts
nha: maybe you can still reuse some Mesa functions on a lower level, though
glisse: osiris__: you could ask mesa to do conv on case you don't handle
osiris__: I'll probably just copy the necessary code
osiris__: glisse: mesa doesn't have such a functionality
uzytkownik: Hello. How to understand 'TODO/3.0' on http://www.x.org/wiki/RadeonFeature?
glisse: osiris__: hhhmm strange
chithead: uzytkownik: it is explained to the left
uzytkownik: chithead: Oops. Thanks.
Curtman: Sorry for the hardware question, but is there a DVI to component adapter that's different from the SVIDEO to component adapter that comes with some cards?
Curtman: We have tried the output of a DVD player and the display looks great, but anything we display on the ATI card looks terrible.
glisse: Curtman: iirc the dvi port doesn't have a component output so you won't find any dvi to component without huge electronics to convert the signal
agd5f: Curtman: IIRC, ATI used to make a DVI-I to component adapter that used the TV dac from the analog part of the dvi port, but it's not supported by the open source drivers
Curtman: agd5f, I see.. So you don't know of any way to get readable text through component on a newer card?
agd5f: Curtman: component might work with vesa or text console
zhick: wow this is weird
zhick: is anyone else here experiencing xdamage problems (windows not drawn correctly/completely) with dri2/kms and kde4 with enabled desktop-effects?
stikonas: zhick: you need newer xserver
stikonas: it is a known problem
zhick: i'm using git master :>
zhick: pulled a few hours ago
zhick: it doesn't get much newer :D
stikonas: zhick: fix only got in about a week ago,
nanonyme: stikonas: So how new is supposed to be new enough?
nanonyme: Maybe a regression then?
zhick: anyway, what i wanted to say: after playing around a bit with glxgears and some kwin effects (show desktops and show windows) it was suddenly gone oO ...
zhick: but back again after disabling and enabling effects ... and wasnt able to reproduce it so far. : /
zhick: has this really been fixed in xserver? so maybe a git-bisect would help to find the regression which reintroduced this again? :>
nanonyme: Would probably be helpful if you could pinpoint the commit which stikonas says should fix it. :)
stikonas: nanonyme, zhick: ^^
nanonyme: Yay. :)
zhick: ok, ill mark that as good and current as bad and start bisecting then :>
nanonyme: would first check if it's fixed with that commit
nanonyme: Before bisecting.
zhick: ok, that's probably a good idea :p
zhick: is there a git command to revert all commits back to a certain one? i'm not a git guru, barely know about pull and bisect
ajax: git reset --hard
zhick: ajax: ty :)
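For reference, ajax's `git reset --hard` rewinds both the branch and the working tree to a given commit; a throwaway-repo sketch (all commit messages here are made up for the demo, and in zhick's case the target would be the xserver fix commit) looks like this:

```shell
# Throwaway repo demonstrating `git reset --hard <commit>`.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "suspected-good commit"
good=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "later commit"
# rewind branch + working tree back to the older commit:
git reset --hard -q "$good"
git rev-parse HEAD    # now prints the same id as $good
# for the regression hunt itself, `git bisect start` followed by
# `git bisect good <commit>` / `git bisect bad <commit>` walks the range.
```

Note that `reset --hard` discards uncommitted local changes, so stash anything you care about first.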
rektide: i'm interested to know if its possible to shut down my video card (4870x2) while my system is running?
rektide: talking out my bum, but turn off X, and send it an acpi signal to tell it to suspend?
rektide: i use the computer infrequently, and i'd very much like to save the 70w of power it uses idling
mjt: most ati cards do have power management
rektide: afaik it involves idling down, but the draw is still significant
mjt: and it's one of the frequent questions
rektide: that was your post about a cursor blinking causing a 10w power drain, right? while back now, i may be mis-attributing.
mjt: no, not mine
mjt: but i remember it ;)
rektide: well, i know power consumption doesnt go un-noticed.
mjt: was on lwn a while back
mjt: in short: this topic is interesting, many people are concerned about it, but it seems it's not the highest priority
rektide: given how many people i know who have their system on 24/7 and use it 4-5 hours a day, suspending the video card altogether seems like it'd be the True Way To Go.
rektide: save the planet man
mjt: even basic power management is lacking currently, as far as i can see
mjt: i too am very interested in this: turning off the onboard 780g on a headless server :)
rektide: i know the road is extremely long & there are priorities (like getting the driver working)
mjt: ..which is the hottest component of the whole system
rektide: no doubt. :)
mjt: there IS doubt usually
mjt: because the CPU is usually hotter than this
rektide: my 4870x2 is just south of my northbridge, so unless i ramp up the fans it idles at 80C and frequently causes the northbridge to overheat & die.
mjt: but mine is undervolted 45W and draws less than the northbridge
rektide: northbridge + gpu in your case
rektide: i'm curious to hear what exactly turning off a video card would mean
mjt: but speaking of turning it off (even discrete card).. well, for some reason i don't think it's possible
rektide: well, acpi can tell components to "suspend"
mjt: but i'm not a hardware guru ;)
rektide: that was my one thought
rektide: i dont know what a suspended video card does or what that means for power draw
rektide: but it was the first hting i thought of
rektide: i dont know whether its possible to suspend just a component while hte rest of the system is active
agd5f: you can switch the card to d3 cold
rektide: and you can do that for one particular card ?
rektide: right, device state, not system state
rektide: i dont really know how the whole constellation of ACPI works together
rektide: thats really what i'm looking for; d3'ing the card
rektide: any ideas on how i'd get it back when i want it again later?
agd5f: d states are pci, acpi has its own stuff
agd5f: switch to d0
rektide: then just start x again
agd5f: you'd probably need to re-initialize the card using atombios
rektide: any suggestions for how i'd set the d state in the first place? is there a generic tool for that, or would i need something for radeon in particular?
agd5f: in the kernel you can use the pci layer to change the state via pci config registers
agd5f: not sure how much device specific stuff is needed
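As agd5f says, D-states live in PCI config space: per the PCI Power Management spec, bits 1:0 of the PM capability's PMCSR register encode D0..D3hot (D3cold isn't reachable by writing PMCSR at all; it means main power has been removed). A sketch of just the bitfield handling, illustrative only; in the Linux kernel you'd go through `pci_set_power_state()` rather than poke the register yourself:

```c
#include <assert.h>
#include <stdint.h>

/* PMCSR power state field, bits 1:0 (PCI Bus PM spec):
 * 00 = D0, 01 = D1, 10 = D2, 11 = D3hot.
 * Illustrative bitfield helpers only, not kernel code. */
#define PMCSR_STATE_MASK 0x3u

static uint16_t pmcsr_set_state(uint16_t pmcsr, unsigned state)
{
    return (uint16_t)((pmcsr & ~PMCSR_STATE_MASK) |
                      (state & PMCSR_STATE_MASK));
}

static unsigned pmcsr_get_state(uint16_t pmcsr)
{
    return pmcsr & PMCSR_STATE_MASK;
}
```

Getting the card back, as agd5f notes below, is the hard part: writing D0 into PMCSR restores power but the driver still has to reprogram the device, hence the atombios re-init.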
mjt: and if i do that with the onboard 780g graphics?
mjt: probably not implemented in hw
rektide: its pretty basic ACPI stuff; i'd be surprised if it didnt work
agd5f: d states are pci, not acpi
agd5f: although I guess acpi uses similar nomenclature
zhick: hm... kwin effects are still broken for me
zhick: just reset to http://cgit.freedesktop.org/xorg/xserver/commit/?id=f250eea2e90fc50bec5214c2f41132b95edc2c46
stikonas: zhick: maybe you also need newer DDX driver, or newer mesa
zhick: stikonas: ddx is from glisses radeon-gem-cs3 branch and mesa is radeon-rewrite... both only a few hours old
glisse: zhick: is xserver from git too ?
zhick: gisse: yep
zhick: zhick tried with current master and tried resetting to f250eea2e90fc50bec5214c2f41132b95edc2c46
zhick: so it's no regression...
zhick: it was right for a few seconds (windows were drawn completely etc) after playing around a bit with glxgears and desktop switching... but it was broken again after i disabled and enabled the effects again, and i wasn't able to reproduce it yet.
NForce25: helllooo...anyone using xf86-video-ati on rs690?
MostAwesomeDude: NForce25: Ask your question. :3
NForce25: i was using xf86-video-ati with rs690, had awful artefacts with opengl games and awful low performance, and thought it's supposed to be like that
NForce25: but yesterday i tried it with r400 x800. And performance was at catalyst level, and there were no artefacts
mjr: yes well, that's because support for different chip generations is at different levels
nanonyme: That's quite impressive, I'm not sure if anyone genuinely expected Catalyst level performance. ;)
agd5f: NForce25: try radeon-rewrite branch of mesa
NForce25: mjr: i thought r400 and r500 are at the same level...
NForce25: nanonyme: with my card i have 50 fps with catalyst and only 15 with xf86-video-ati :(
mjr: r300-400 have had attention for quite a while longer
NForce25: agd5f: how could i get it?
mjr: (there was an RE-based effort going on before the specs came)
nanonyme: NForce25: With which game? :)
NForce25: urban terror)
NForce25: mjr: could i expect things to get better in the future?
NForce25: and that "radeon rewrite branch of mesa" thing, does it help?
MostAwesomeDude: agd5f: So how goes the KMS+GEM+CS for r600+? Is there a lot missing still?
glisse: MostAwesomeDude: i will get to that shortly, once i figure out r100/r200/r300 bugs
glisse: well cliprect bugs
MostAwesomeDude: glisse: Awesome. I imagine it wouldn't take too much work, but I'm not the expert in that field.
MostAwesomeDude: As soon as you've got it up, I'll add r6xx to Galllium.
MostAwesomeDude: *Gallium, even.
glisse: MostAwesomeDude: kms bits will be easy, also i think agd5f already has much of the cs ioctl working in the non kms case so it should be easy in the kms case
ZitZ: hi again, what might cause a motherboard to not work with a graphics card only when dri is enabled?
MostAwesomeDude: glisse: That's good. I only need GEM for softpipe.
MostAwesomeDude: CS will be needed for the r600 pipe, but I'm not gonna start on that for a while.
glisse: MostAwesomeDude: i will start on gallium as soon as i can too
glisse: my time would still be devoted to fix bugs as a priority :)
MostAwesomeDude: There's plenty of r300g bugs if you like. :3
MostAwesomeDude: But I think that I can finish r300 by myself. I'm slow though.
NForce25: guys, how about rs690 bugs? ^^
MostAwesomeDude: NForce25: osiris has fixed a lot of bugs recently in the radeon-rewrite branch of Mesa.
NForce25: could you give me more info about that branch?
telexicon: is it normal for scrolling or painting of scrolled windows to be so slow that you can see it being painted and it uses 100% CPU?
glisse: osiris__: do you know a combination of xserver/mesa where dri1 works ?
MostAwesomeDude: NForce25: airlied is rewriting the r100, r200, r300 drivers, and osiris has been stamping out rs690 bugs.
glisse: ie no problem when moving glwindow or other window above
telexicon: MostAwesomeDude, rewriting the r100 driver?
telexicon: MostAwesomeDude, why?
telexicon: for GEM/DRI2 and all that?
glisse: is not that much of a rewrite
MrCooper: glisse: see the patch at http://bugs.freedesktop.org/show_bug.cgi?id=21653
telexicon: oh :(
telexicon: why not a rewrite?
MostAwesomeDude: telexicon: Because there's a bunch of code duplicated between them, and we're trying to make it more slimmed down and shared. Also DRI2, CS, GEM, FBOs, etc.
telexicon: oh and UXA
telexicon: or, is UXA still going to go back to EXA?
nanonyme: UXA is just for Intel.
telexicon: i thought UXA was just EXA enhanced for GEM?
nanonyme: Won't be going back to EXA and radeon won't be moving to it.
telexicon: oh ok
glisse: MrCooper: i am on r200 at the moment
MostAwesomeDude: UXA has nothing to do with Radeons.
telexicon: is there a way to fix window scrolling?
telexicon: i mean, what is it that causes all that lag?
MrCooper: glisse: doesn't matter, the fbconfig generation code is shared between all Radeon drivers
MrCooper: telexicon: you'll need to be slightly more specific than 'window scrolling'
MostAwesomeDude: telexicon: Fx3, right?
telexicon: MostAwesomeDude, not just fx3, but also xchat, gnome-terminal, pdf reader
telexicon: scrolling in vim in gnome-terminal is like 2 seconds per frame
MostAwesomeDude: telexicon: Which Xserver? Probably 1.5 or odler, right?
MostAwesomeDude: *older, even.
telexicon: X.Org X Server 1.6.0
MostAwesomeDude: And your DRM works? And you're using EXA?
telexicon: glxinfo | grep direct -> direct rendering: Yes
MostAwesomeDude: Fx3 can be fixed; turn off smooth scrolling in your preferences.
MrCooper: xchat just renders text very inefficiently, but gnome-terminal and evince are snappy here
telexicon: smooth scrolling is off
nanonyme: telexicon: That string mostly means nothing.
telexicon: MrCooper, yeah ive noticed its more efficient than other things
telexicon: but gnome-terminal is very choppy
telexicon: nanonyme, oh ok, Xorg.0.log then?
nanonyme: Yeah, that'd be a quite sure place to check.
glisse: MrCooper: btw do you happen to know if age count retrieval through scratch reg written back by the gpu ever worked in userspace ?
telexicon: "(II) GLX: Initialized DRI GL provider for screen 0" ?
telexicon: "(==) RADEON(0): Using EXA acceleration architecture"
nanonyme: Well, then you have EXA.
telexicon: (II) Module radeon: vendor="X.Org Foundation" compiled for 1.6.0, module version = 6.12.2
MrCooper: glisse: yeah, I had it working at some point
nanonyme: Which card was this again, btW?
MrCooper: never got around to cleaning it up and integrating it
telexicon: nanonyme, 01:00.0 VGA compatible controller: ATI Technologies Inc Radeon Mobility M6 LY
MrCooper: didn't seem to be a significant win anyway
MrCooper: telexicon: you probably have too little video RAM for EXA, try XAA
telexicon: oh :(
nanonyme: MrCooper: Oh, that can actually happen? Sounds bad if the idea is to eventually drop XAA. :p
NForce25: how much video ram is needed for exa?
telexicon: i thought i had 16MB
MostAwesomeDude: nanonyme: Xserver doesn't feel bad about chewing through your VRAM. :3
osiris__: glisse: don't know. I don't remember when I have run other than radeon-rewrite branch of mesa last time.
MrCooper: nanonyme: it'll be better with kernel graphics memory management
telexicon: well, im running compiz fusion atm (though ive been talking about it being slow when its off)
nanonyme: MrCooper: Could KMS+mm improve his situation then too?
telexicon: mostly because it makes switching windows faster
MrCooper: nanonyme: sure, that's kernel graphics memory management
telexicon: i wonder if i could try enabling that, whats required to enable those things? i have a pretty new kernel (2.6.30)
nanonyme: telexicon: It's not in any kernel yet.
telexicon: oh ok
nanonyme: Or rather any released one.
nanonyme: It'll hopefully get into 2.6.31.
telexicon: ok, well ill try XAA and see what happens
telexicon: thanks for the suggestion
nanonyme: If you're comfortable with compiling kernels, could try that DRI2 guide from glisse's blog.
nanonyme: This was an earlier card than r6xx, right?
telexicon: er, RV100
nanonyme: http://jglisse.livejournal.com/1822.html here's a guide on the new KMS+mm stuff as far as I've understood
nanonyme: It's mostly focused on DRI2 (which helps with eg Compiz) but it should set you up with KMS+mm in kernel and an X driver that's aware of them.
nanonyme: (Which is the thing that could help)
telexicon: ok, ill try that
telexicon: interesting, ok XAA is much much faster
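For reference, the switch telexicon made is the radeon DDX's `AccelMethod` option in the device section of xorg.conf (the identifier below is hypothetical; on drivers of this era the default is EXA):

```
Section "Device"
    Identifier "Configured Video Device"   # hypothetical identifier
    Driver     "radeon"
    Option     "AccelMethod" "XAA"         # fall back from EXA on low-VRAM cards
EndSection
```

XAA keeps less pixmap data in video RAM, which is why it helps on a 16MB Mobility M6.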
ZitZ: where can I find information on what kind of power supply would be necessary for a x1650 pro? how many amps do I need on th 12v rail?
Zajec: what is coherenc output?
rah: how do I designate a monitor as being the primary monitor in xorg.conf?
mvc1741: hello, I was testing the radeon rewrite package for ubuntu in jaunty but after almost 40 min playing in zsnes the cpu heated too much and the laptop shut down suddenly