<hexsprite> how could I do a query to find all documents where doc.start == doc.end?
<hexsprite> looks like $where does the trick... anything faster?
<victorqueiroz> I'm using GridFSBucket, but when I openDownloadStream() with arguments start: 2031585 and end: 2097120, where end - start === 0xFFFF, I get a 108773-byte result. Why is that?
<aV_V> good morning
<aV_V> how do I change the UUID configuration for the Java driver? I want to change the default variant from v3 to v4
<Derick> aV_V: I am not sure how easy it is to change the default, as that's part of the driver itself
<Derick> hexsprite: using $where is a bad idea, as it pushes the query into JavaScript. Your best bet for performance is to pre-calculate the value, store it separately, and index it so you can query it directly.
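A sketch of the difference in the mongo shell, assuming a hypothetical `docs` collection (`someId`, `s`, and `e` are placeholders): `$where` runs JavaScript against every document and cannot use an index, while a pre-computed flag can.

```javascript
// Slow: evaluates JavaScript per document, never uses an index.
db.docs.find({ $where: "this.start == this.end" })

// Faster: pre-compute the comparison on write, then index it.
db.docs.update(
  { _id: someId },
  { $set: { start: s, end: e, startEqualsEnd: (s == e) } }
)
db.docs.createIndex({ startEqualsEnd: 1 })
db.docs.find({ startEqualsEnd: true })
```

Newer server versions (3.6+) also accept `db.docs.find({ $expr: { $eq: ["$start", "$end"] } })`, which avoids JavaScript, but it still cannot use an index for this comparison, so the pre-computed flag remains the fastest option.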
<aV_V> Derick: I've found this https://gist.github.com/anonymous/a740e8a2a1d2e3a37cf4 :)
<aV_V> now I'm looking for the best way to store & get credentials
<Derick> aV_V: does that still work? It's 2 years old, so it could be for an old version
<Derick> not sure what you mean about credentials
<aV_V> just did the test and it isn't working, still getting legacy UUIDs :S
<Derick> aV_V: you should be able to set the right type when you instantiate them, though?
<Derick> aV_V: with https://api.mongodb.com/java/3.0/org/bson/BsonBinary.html#BsonBinary-org.bson.BsonBinarySubType-byte:A- and https://api.mongodb.com/java/3.0/org/bson/BsonBinarySubType.html you should be able to do it
<aV_V> Derick: are you suggesting changing from the UUID type to BsonBinary?
<Derick> yes, as that gives you control over the UUID subtype
<Derick> (MongoDB doesn't have a UUID type; it's just a subtype of BsonBinary)
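For illustration, the difference between the two binary layouts can be shown in plain Java (a sketch: `UuidEncoding` and its method names are hypothetical; the subtype constants `UUID_STANDARD` (0x04) and `UUID_LEGACY` (0x03) are from `org.bson.BsonBinarySubType`). The standard representation stores the RFC 4122 bytes big-endian, while the old Java driver's legacy representation wrote each 8-byte half little-endian.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.UUID;

public class UuidEncoding {
    // Standard layout (subtype 0x04): RFC 4122 big-endian byte order.
    static byte[] standardBytes(UUID u) {
        ByteBuffer buf = ByteBuffer.allocate(16); // big-endian by default
        buf.putLong(u.getMostSignificantBits());
        buf.putLong(u.getLeastSignificantBits());
        return buf.array();
    }

    // Java-legacy layout (subtype 0x03): each 8-byte half written little-endian.
    static byte[] javaLegacyBytes(UUID u) {
        ByteBuffer buf = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(u.getMostSignificantBits());
        buf.putLong(u.getLeastSignificantBits());
        return buf.array();
    }

    public static void main(String[] args) {
        UUID u = UUID.fromString("00112233-4455-6677-8899-aabbccddeeff");
        byte[] std = standardBytes(u);
        byte[] legacy = javaLegacyBytes(u);
        // The first half comes out reversed in the legacy layout:
        // std starts 00 11 22 33 ..., legacy starts 77 66 55 44 ...
        if (std[0] != (byte) 0x00) throw new AssertionError();
        if (legacy[0] != (byte) 0x77) throw new AssertionError();
        // With the driver classes Derick linked, the standard form would be
        // wrapped as: new BsonBinary(BsonBinarySubType.UUID_STANDARD, std)
    }
}
```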
<aV_V> OK, let me think about how I could do it without needing to change my logic
<aV_V> I don't think it's a viable solution for me; I'd have to change a lot of things to make it work. Actually, the configuration I linked before does work, but I hadn't tested it properly. The issue is that I must use the new MongoClient API instead of the old one (MongoTemplate)
<aV_V> Derick: thanks for everything
<dalaran> [ftdc] serverStatus was very slow: { after basic: 20, after asserts: 30, after connections: 30, after extra_info: 1180, after globalLock: 1400, after locks: 1400, after network: 1400, after opcounters: 1400, after opcountersRepl: 1400, after repl: 1400, after storageEngine: 1400, after tcmalloc: 1410, after wiredTiger: 1410, at end: 1410 } (logged 2017-06-27T09:04:38Z)
<dalaran> Hey, is this because I have problems with paging? We made some changes to the indexing and the issues have gone away
<dalaran> but I'm trying to figure out why they happened in the first place, and from what I understand, extra_info is connected to paging problems
<Derick> dalaran: seems possible...
<Derick> dalaran: but difficult to say with just that information
<dalaran> Derick: alright, I think I'll let it run with the new indexing fixes for a while and we'll see if it pops up again. I've also been getting "Starting an election, since we've seen no PRIMARY in the past 10000ms" around the same time, so it seems to be connection issues
<dalaran> Derick: thank you a lot for your help :)
<Derick> dalaran: np :-)
<menace> hi, how difficult would it be to migrate an application which uses Berkeley DB to MongoDB? I have issues with non-atomic access in Berkeley DB, and I hope that switching to MongoDB, which apparently has the possibility of atomic reads/writes, could solve that problem...
<Derick> a straightforward port from Berkeley DB tables to MongoDB collections should be simple, but you won't get some of the other benefits of MongoDB (such as a different schema)
<menace> but I could do atomic transactions?
<menace> without using a different schema?
<Derick> MongoDB doesn't do transactions
<menace> okay, perhaps wrong wording. Are there atomic operations, in the sense that concurrent read/write operations on the same data on a single server instance do not create races with potentially wrong results?
<Derick> that can be tricky, depending on how you define "wrong results"
<menace> well, at the moment my application writes and reads Berkeley DB data from threads, and after a certain point Berkeley DB crashes and the database content is garbage. I want to avoid that by switching to MongoDB
Derickoh, that ought to work fine with MongoDB :)
menacethat would be cool.
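For context, the atomicity MongoDB offered at the time of this conversation is per-document: any single update or find-and-modify is applied atomically on the server, which covers the concurrent read/write races menace describes (multi-document transactions are a separate, later feature). A sketch in the mongo shell, assuming a hypothetical `counters` collection:

```javascript
// Two threads running this concurrently never lose an increment:
// the $inc is applied atomically on the server, per document.
db.counters.update({ _id: "pageviews" }, { $inc: { n: 1 } }, { upsert: true })

// Read-and-modify in one atomic step, avoiding a separate read/write race:
db.counters.findAndModify({
  query: { _id: "pageviews" },
  update: { $inc: { n: 1 } },
  new: true   // return the post-update document
})
```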