- User Since
- Nov 7 2013, 8:47 AM (193 w, 2 d)
Mon, Jul 17
Fri, Jul 14
Wed, Jul 12
Tue, Jul 11
This is fine, at least until we have a better solution. I'd go with a diff applied on top of the yumrepoinfo downloaded from the repos (or installed from the package), so we don't even need to keep the local copy, which can get outdated and still not fail the checks.
Well, I'm not fundamentally against it, but I'd be happier if you used INFO (which is already present) with "test was skipped" in the note - especially since you say it's supposed to be a pass anyway.
The intended semantics of the result values are:
PASSED - everything ok, no questions asked
INFO - the same as PASSED for automation purposes, but a yellow flag for human consumers
NEEDS_INSPECTION - treated as FAILED in automation, red flag for human consumers
FAILED - something really went sideways
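The mapping above can be sketched as a small helper. To be clear, the outcome names are resultsdb's, but the constant and function names here are illustrative, not part of any actual API:

```python
# Illustrative helper for the outcome semantics described above.
# The outcome strings match resultsdb; everything else is hypothetical.
AUTOMATION_OK = {"PASSED", "INFO"}             # INFO == PASSED for automation
AUTOMATION_FAIL = {"NEEDS_INSPECTION", "FAILED"}  # both treated as failures

def gating_passed(outcome):
    """Return True if automation should treat the outcome as a pass."""
    if outcome in AUTOMATION_OK:
        return True
    if outcome in AUTOMATION_FAIL:
        return False
    raise ValueError("unknown outcome: %r" % outcome)
```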
Fri, Jun 30
for future ref, the reasonable way to get around syncing requirements.txt and setup.py is adding:
+ with open('requirements.txt') as fd:
+     install_requires = [l.strip() for l in fd.readlines()
+                         if l.strip() and not l.strip().startswith('#')]
...
+     install_requires=install_requires,
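The same pattern as a self-contained sketch - the parsing logic is the point; the sample requirements content is a made-up placeholder:

```python
# Sketch: derive install_requires from a pip-style requirements file,
# so setup.py and requirements.txt never drift apart. The resulting list
# is what you would pass to setup(install_requires=...).

def read_requirements(lines):
    """Strip blank lines and comment lines from pip-style requirements."""
    return [l.strip() for l in lines
            if l.strip() and not l.strip().startswith('#')]

# Placeholder sample content, just to demonstrate the filtering:
sample = """
# runtime deps
Flask>=0.10
SQLAlchemy
""".splitlines()

print(read_requirements(sample))  # -> ['Flask>=0.10', 'SQLAlchemy']
```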
@mjia BTW, what exactly is your use-case, and why is installing from repos not OK for you? I'm asking for two reasons:
- packaging resultsdb for PyPI is proving to be rather a PITA
- if you actually need to use resultsdb, some setup is necessary anyway - creating the database, for example. Since you are going to need to solve that too (and probably are right now), is installing from repos (aka another line in the setup script that you'll need to have anyway) really a problem?
Hi @mjia, I don't have any fundamental issues with it, apart from the fact that it adds yet another place to store/track deps.
Tue, Jun 27
Looks good. I fixed a non-issue lint error and merged the patch. Thanks!
Mon, Jun 26
Some minor issues, mostly WRT keeping the new code in the same style as the already existing pieces. My only actual functional issue is that I'd probably just allow _sort to manipulate submit_time - the other fields of the Result object are IMO not really that relevant for custom sorting, and the "exciting" user data (which I suppose could be more interesting to sort by) is stored in ResultData (and thus inaccessible to this patch) anyway.
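A minimal sketch of the whitelist-style `_sort` handling I have in mind - the parameter format mirrors resultsdb's `desc:field` convention, but the function name and structure here are hypothetical, not the actual patch:

```python
# Hypothetical whitelist-based parser for a "_sort=desc:submit_time" style
# query parameter; only explicitly allowed fields are accepted.
ALLOWED_SORT_FIELDS = {'submit_time'}

def parse_sort(param):
    """Return (field, descending) for a valid param, else raise ValueError."""
    direction, _, field = param.partition(':')
    if direction not in ('asc', 'desc') or field not in ALLOWED_SORT_FIELDS:
        raise ValueError('unsupported sort: %r' % param)
    return field, direction == 'desc'
```

The whitelist makes it trivial to open up more fields later, while keeping arbitrary column names out of the query layer.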
Jun 20 2017
On top of the failing tests - some of which are really weird, and which I suggest you investigate/fix ASAP - I have some mostly semantic issues.
this is fine
If the rpm builds/installs and migrations keep working, it's fine with me. Have you tried building/installing the package, and running resultsdb [init_alembic, init_db, upgrade_db] commands? I'm not really sure how this works, but it seems like the script_location is a path that needs to work from the location from which the migration is executed. I can see how this works when you run it from inside the repo, but I'm not sure whether this will be the same for the app installed from the package.
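For reference, alembic can also resolve script_location as a package resource instead of a plain filesystem path, which sidesteps the "works only from inside the repo" problem. A sketch of the config section - the package and directory names here are assumptions, not necessarily the actual resultsdb layout:

```ini
# alembic.ini sketch: the "package:directory" form resolves the migration
# scripts via the installed package rather than the current working directory
# (package/directory names below are assumptions)
[alembic]
script_location = resultsdb:alembic
sqlalchemy.url = driver://user:pass@localhost/dbname
```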
Jun 19 2017
Jun 14 2017
Just FYI - I don't have, and won't have, any input or comments on this. Feel free to merge once you decide what you wanna do.
Jun 13 2017
Jun 12 2017
The patch overrides all the buildsteps we use - since we want every step to report on its progress, we ought to change how they behave, base or not. (I'm not sure I understand what you mean, though).
May 26 2017
Who am I to say what works and what does not :) I guess you tested it with your use case, so feel free to merge!
May 23 2017
Well, I'm not a huge fan of this - since not all the commands support the --format flag. This will most definitely have the same issues as testcloud does, but if the goal is to really have it now...
May 18 2017
May 16 2017
Looks good for a WIP. My concern here is that with disposable minions, we do a thing where we use "the right Fedora version", so e.g. fc24 packages are tested on an F24 machine, and so on. I'd like to see the same for Docker - it can even be done quite easily. Not that it needs to happen for the PoC, but I'd like at least a big fat "TODO/FIXME" somewhere in the code to remind you of that :)
Looks good for a WIP to me. My only real concern here is the broad deletion of all the minion-related code - this might be because I'm not familiar with the design details, but WRT the disposable "minion" - what is the plan for choosing the "right" image for the task (aka using F24 to test F24 packages) that we had in the "previous" code? Are we dropping the feature altogether, or is there another process being planned to do that? If so, what is it?
Apr 20 2017
What would be the usual search queries?
Apr 3 2017
I'd like to have the task['repo'] changed to task['git_repo'] for the sake of consistency, but it is not a huge issue. Looks good other than this nitpick.
Mar 30 2017
Apart from the comments below, I'd like to see the testsuite updated to reflect the changes - I don't see how this could pass the unittests.
Mar 24 2017
I'd like to see some inline comments, but looks good otherwise
Mar 21 2017
Mar 14 2017
Mar 8 2017
D1150 will need adjustments, but that's a non-issue.
Fine with me
Mar 3 2017
Feb 23 2017
Feb 22 2017
https://pagure.io/fedora-qa/lolz_and_roffle - parses the template file from openQA for priorities, and shows whether the compose identified by a compose ID is a valid alpha/beta/final, plus a list of failures in each category. Does what you want, and does not need to be updated when tests are added to openQA.
The only tricky part is handling different priorities for different composes/images on the same testcases (e.g. i386 failures are all basically marked as optional, and so is anything Atomic now). Also, some of your testcases (the upgrades, as far as I checked) do not report arch to resultsdb, but only have 64bit in the testcase name - seems like a bug, but it is not really relevant to the current state of things.
Example output: https://paste.fedoraproject.org/paste/ar8ur1Pq5iDT7lmoso~zOF5M1UNdIGYhyRLivL9gydE=
Said tool took about two hours to write, and the worst part was figuring out how to parse the Perl file and handling the different priorities you have for the same testcases (so, the actual testplan-management-y stuff). Yay for the mighty resultsdb, I guess...
Feb 21 2017
Nice one, here. You had no problems putting together random scripts in the past... ;)
Well, I know I told you this already, but just to reiterate - I do not believe that this is the right place and/or way to store "what is important for this one group" information. The reason is that this is a _policy_ that is by definition _prone to change_. What if there is a decision to change the priority? You cannot retrospectively change the values in RDB...
The way to do this is _in a separate system_ - I know you don't like it, and call this a "short term hack", but we all know that short-term hacks have a tendency to stay (and even worse, in this case, also set a standard).
These kinds of issues should be handled by an external system that consumes the raw resultsdb data and has the "testplans" (because, honestly, this is nothing more than a poor man's testplan implementation) configured in it.
If that is what it takes, I'm even willing to put together such a tool, one that lets you configure which kinds of results, based on metadata (testcase, extradata), are to be taken for, say, the "Alpha priority of fedora compose workflow". Thinking about it, we even have a base for that system in the Dashboards.
Feb 20 2017
Are there issues, still, or are we waiting for the changes to be deployed on all the instances? Dev seems to be working reasonably well for me.
Feb 16 2017
LGTM, thanks for taking care of this
Feb 14 2017
Also, could you please, next time, update the revision you created the first time, instead of creating new diffs that are basically the same? Thanks!
Feb 13 2017
You could have updated this revision instead of creating D1130, but I guess Phabricator is a bit complicated for a first-time user.
Please abandon this revision (go down to the "add comment" section and select "abandon" in the "action" field), so we don't have an unnecessary duplicate.
Feb 11 2017
How about we just scratch depcheck in favour of rpmdeplint instead? @kparal did some result verification two weeks back, and it produces the same results as depcheck, while not being a steaming pile of ***. With the added benefit of the testing tool not being "our" code.
I don't see value in trying to make depcheck work now, honestly.
Thoughts? @kparal @mkrizek
Feb 10 2017
Cool, thanks. I did not see that page since it was first created :D
@adamwill absolutely just a courtesy - we are still talking conventions (like "you should provide an item" style convention, maybe, but still a convention), not hard rules, at least in the scope of the actual implementation.
Also, even if this were "the true way", I'd still see a reason to have an "I don't care, just give me a random UUID" option. The UUID is also a nice identifier, as it has constant length and similar properties that only really matter to machines - humans don't care about them, but they are important anyway.
A good thing about the actual implementation is that there won't be collisions between the "random" and "specific" UUIDs, by definition of how UUIDs work (different namespaces), so you don't even need to be concerned about "random" group results mixing up with the "proper" ones.
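The non-collision property can be seen directly in the stdlib uuid module - random (version 4) and name-based (version 5) UUIDs carry different version bits, so the two kinds can never be equal. The namespace URL below is purely illustrative:

```python
import uuid

# A "random" group UUID vs. a "specific" name-based one derived from a
# namespace; the URL used as the name here is just an illustration.
random_id = uuid.uuid4()
specific_id = uuid.uuid5(uuid.NAMESPACE_URL, 'https://taskotron.example/job/1')

# The version nibble differs (4 vs. 5), so a random UUID can never
# collide with a name-based one; uuid5 is also fully deterministic.
print(random_id.version, specific_id.version)
```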
Feb 9 2017
Like it, thanks!
Feb 8 2017
@mprahl can you then please abandon the revision? I could do it for you, but that would mean me commandeering it first, which changes the authorship. Thanks!
- make it more git source agnostic
- store server
Feb 7 2017
@adamwill the thing is that the group's description is not unique (and IMO should not be), so even though I don't disagree with the need for sensible naming, nor with the fact that searching the names should be possible, I'm pretty certain that a direct "all the results in the group with this pretty name" is not a good query. The "name" is really just the human-readable description; the identifier is the UUID.
From my POV, users should not need to query results by group name, but by what was tested in the result (the item) - I'm open to discussion, though. If a more complex dashboard is needed, it IMO should be a separate piece of code from resultsdb itself. ResultsDB should stay pretty agnostic to what the data "mean", because once we start to heavily rely on a schema meaning something, it stops being agnostic and becomes a targeted solution.
This is once again not disputing the fact that targeted solutions are great. I just firmly believe that those should be a different system, consuming the data from ResultsDB and presenting it in a more sensible way.
Does this make sense? I'm not absolutely sure we are talking about the same thing here, so I may be commenting on something completely different than what you had in mind.
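Querying by what was tested would look something like this against the v2.0 API - the endpoint shape follows resultsdb's v2.0 scheme, while the host and item values below are placeholders:

```python
# Sketch: query results by the tested item rather than by group name.
# The /api/v2.0/results path and the "item" filter follow resultsdb's
# v2.0 API; the base URL and item used below are placeholders.
from urllib.parse import urlencode

def results_by_item_url(base, item):
    """Build a resultsdb v2.0 query URL filtering results by tested item."""
    return '%s/api/v2.0/results?%s' % (base.rstrip('/'),
                                       urlencode({'item': item}))

print(results_by_item_url('https://taskotron.fedoraproject.org/resultsdb_api',
                          'bash-4.3.42-1.fc24'))
```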
Looks like the wrong version of the diff was uploaded, based on the comment. I'm fine with the changes overall, just make sure this is what you really wanted. (Consider this not a NACK, but NEEDS_INSPECTION :) )
It really is not required anymore. Thanks for catching this.
Awesome, I like this. Looks like a reasonable middle way to solve the issues.
Thanks a lot for tackling this!
Feb 6 2017
@kparal honestly, at this point, would it not be better if you just spun up an 'empty' virtual machine and figured out all the deps? Not that I have huge issues with polluting my system with random -dev packages (and I sure could figure them all out on my own, given enough time), but I don't see the benefit. Let's discuss in qa-devel.
OK, so more compilation fun - seems like the openssl headers are missing now:
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
 #include <openssl/opensslv.h>
                              ^
compilation terminated.
error: command 'gcc' failed with exit status 1
(Reposting the relevant bits of conversation from D1111)
Sure can do, but tell me once again, where will the make test actually be used? If it is supposed to be the 'primary' recommended way to run the testsuite for devs, then I'd object, honestly, as I don't see how compiling a bunch of libs is reasonable.
Feb 4 2017
@kparal maybe even too pristine?
No package 'libffi' found
c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory
 #include <ffi.h>
                 ^
compilation terminated.
Feb 3 2017
@kparal whatever works for you. I'd rather store the task name, than testcase name, though, as tasks can produce multiple testcases.
Feb 2 2017
Not commenting on the changes, as I can't get the makefile to do what I need anyway. But regarding the failing test:
The groups actually are useful, but I agree that the naming could be better. The thing is that the group UUIDs are provided by ExecDB, so we can tie the whole execution of a taskotron task together in a sensible way - i.e. all the results from one taskotron job are in the same group.
We sure could name them better (now we just take the task name as the identifier), but it really was not a priority, as the only thing we really care about is being able to have a link like this: https://taskotron.fedoraproject.org/resultsdb/results?groups=32682314-e970-11e6-91d5-5254008e42f6 in execdb (the relevant execdb job is here, although the link to results is broken, as I forgot to update it with apiv2.0 in mind: https://taskotron.fedoraproject.org/execdb/jobs/32682314-e970-11e6-91d5-5254008e42f6 )